[Binary tar archive — not a text document. Contents (from tar headers):
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)
The gzip payload is binary and not recoverable as text; extract with `tar -xf` and `gunzip` to read the log.]
,غ/3NJL+Q~!>?YCcW)?t%޿&mzpzĎ|O???ȅo>{:! \Hi Rn=m5v547ZZ`%]S:6v,qajFC/$|c Sny~b_i]^kYywYO ̈́mE4PZn?^`B,ae5nU[1ܲtM<-}o> F98+\Qߴ(~` ia A~^%@"X\YK(8mc~'ah!>"a&b0B`p<2H%gESπ) bEԽttd Z&2( 2 w4 .Uf139JWYŘLT"AX},Exo xcC|5˷KMmLxKXaX7#Qˣ"0S9 ,yp@ϙWyއх(lT&Xa,E@:n401>׮Nqnd١"3}Q{q'J:XM~U>-֦A{u9́5|+GfeRYĒG+PJl)B%|d8ZCNRGRJXZ'~9L:u09d`B ֋}2I{ J5u5TUZD!r},1 YaM>;\]Z,-W]a  WBd]HjQE(Ee3, .ᨔath݃>,5hk\ { jiwzٓ 7CFWU GP:O`NPWC 2W\*-D0v-=[=3o*=ya#c=gᣡht }q|`)JVimF$CD23W@xՂc\+52㓋5lYtҬ[v\DnW6M>]!b< {Wt]1&`W$Oz&"׺j^64}6,qKj丧C)-znA ܂&@/C4֗Ra|u|%;,ҀV{6ԳL8m[z8/8a?}r&_%X<;H#Rr+˽_֯[,Z҆ K59{63.IRy'dY\\\3d `1 /6L+ŵ6ǃ;`BX+AOL祋}R_;Bi\FJl=EG11A[;&bߟ 89] ⦳tU}A'+AqURPyU3F`:"wi#\KLU&=m8WB hj RF?jnTsz'LӕhX]n0rg*K@j|мrSnw555 yP'8=!eɭNh"\Ks$j F 2LdiU pZx[E#gZܶ6 ?ro'n[+C Д*g´ ]\m2YC2pDR ?cIhMy]w?b;#OΌr<|ppeuStq!IRDBDȹEZrίf ޙ4<0sm8 erR)I.8VEF" 0Hp xT*ѫȝqj vh7g?FtH/,]K Oin"tjlwW탣r;tt"t1Kt t.]t}=Wvp :g~ JceXUf_Diin].Dy q|!,ϠPt_oPˬޥi$h/9b. kו֪(& +s4!b,%0M{q{!r~1ӂ? G)BG_^&Ȓ]^a0LqFϒ*+_IB*ͿRӄ3 ̲M) _]Ʀ{Yq=e3y726V =VHڬ2 BKO{dF^f~sƤ.>YGqTyY1M۷yD=n [qaEDy7kzUJ>둇5Ji3\%h/*Akc12Vl=f@OdvS CeAr"kY"My7]XscxW7?n\J}pXf6ㆶR X\t\fdc6 ˬ?qv@/MwEˋ,Wr \*6[;qQGVLj 5EZDmD(?XqP6xP!)/+@K?k%ն=Xv=᭑lJ-X[z5|޶&-wkJ^|!h;Sܾ+V;*7Qmf1VhIjc>^툴]_̋ΝŦɡK-e߷.{|܋ /w%6_PI-A5$d T#UKPmkw#6$,)aYZuG7Ҥ, V Q }$ )Bv Y%WE<7 u!q U^6prWfv&gůؔP;gܡcrǤ[ؔ].v[#"4] C!8j֒J4RNq,z̝v܈&5F[ :aFhNPƖ5Ѵ/_:ϩ+YG} #Ӟg>54/_I RPe69XXU&0?ϒLs3IC Aa pKrO+/$=ߔm9$>U"]fI7홧NS{֜}zCT-trD{Nۧ)<#(_lB'ߋ~.0K<_BGE:mYw֣,:6:"L+TD^%IsH!~%M.[\ԓ2*Ly)GnfAQ( =wf^qε}%30|Fp#_ꗦQ!W'uLpB_7B0$N. (}6 WM`1\pD[)|T7\eEP'&۬IL]Wۛ9K:N" I=>gP6zf5ݛf 6\r(r*](s ?׫7ǼEKSC-XS &g\\ssPAhh:xJ ]$ &z%\H*5)#!6:)8VD77"L 6>٫A !ۄjo~ʾ(ʍ/cf3A ʚw. 
W J/> ޙǏ?!H jO7Vdo.mx)Hl7ʊ[Tw|.xW 5X"6:PT&P+~ dRvב:_GH#uw \+$dGv~;_GH#u„ݑ:ב:_GH#u1bLH/hp—߿L-&g*70W5*S7z4˓Gfml 9QyEwWW e2Oabbw|^yFn]l~\pDؑc=gP79څ]Ⱦ8L [Ql 'tcO /,<΁U 67l(2O`J4X !zg'񪎶*rD6B=.,|FNȉx>?%2P#e) EISۚ oӴw2; tYwZcW'6(( CMF^mL;i*רMmylim% ݯ^T)P:/wN^nc t,q,)%mWV$:}-=cK*=+xҿLlY~BʋBQnxI k߀]wZ W cf(1/0II ^FҔF3"cxNܜf!nM7=-$F}Ys7"ĽOKDn.׹'sh\ Th% Q\+/r1͵@1&Z4#xDtq9l"~+ע3Q:ymF9Kͨuz*,g'ҩJ#HtQ] 0W!(('khr $\؄ A2 F*5(K)µIz1y2o9 =?- غzZw9LZ֚f bGԴP3,k5|U6y)2`.rڀa$<E<Ϩ1˽ F0c,csI+r7 Ml4Y†;"E RrHd 30+!Ay ,P{, * S=T86QzCbupy!A p:ΩBmNOӻ릦FRk 8)#7(1S8Ú3Nz ]C:d1\[$7_d d#? ,pfo_.I>B4ҊUe/ lb$wZ}N1Lƅחi f%b)y7G}q\%[_/! 0v@)/6U6Bgc1[PSUiJ `:q)'eUT )d4&j<Kc5ϐG;[U "EsB\дn#; q${%*6vB6:aaɿ%hk\ *#@BeON rnMnn+w mV]6Cs׻u77俇w,oe@3 QIg QW$ф$i^ v1 62(@/ b0XJ‰'GLnm.q[Rq&f؋WLX!qE,YJӐr+[]0Kc*lPxt5g"Ľ;ޛ,dUԚE1l@|{hL3 e !DHH!I 4HēFJvtD %BXAAof11zNACnWpP\ߖ8BRY%) . . .HkH EapwapwapwapwapwapwaprȒr,Iap<~!)wapwapwAGctĴ0 8*0 0 0 0 0F P^2aYn}lҳqb_;BM P;T`LT{KU%H8HHo(bHqzIS)'Dp%!4+PRNAiqc82"@Dk3ø)a13P&)Ѻ^M=9j<\_BbAbI f^gALG«J#Vh *Ӂy u&t洨RbK*YVNYx},>t^, 1eERT;10|5a:Dmtk dϊ(Pʍ"6FudXD - `S*H $yYWSjj;xK+ofޭ[Dy|𐁕E{$Vr'Xq& )rf 9 6X5ur55!쬭" bJK*7 #] ]kx2HP:+g“rya\iTGTSBnS@ Љk*#Z*,Һ2ȩa-P\}6JN:?TuZɇKw `oXo[cb[&xyo )o Q?!HA7]]ŏj(S89ousn71yڵ^zKI&~Uy̬߫enݜ b i7oSg8ݬd@`r˯V2czDpp|Cee.b0g u:?a8Q<&MfK+$j/ikΟ;(ʖI{uەmw+YNL%bpxWLHMԠDrb;X_ٺ*8$DtR׶kƨjdzI+@ya(jP*3L W`m/ćW cf(1/0~$p$UFҔF3"cėW@!_l\s~ڿ%8y ;,3&Άਫ਼L>!Ȉ^Qlڻm|x~^!j/$X߃^r֜"c ! 
hFL]J!# `]`3Ƣ56urA꯱Nq!VTB<5$hS92UⴑhGQCt(״!m,ΰ,m JŔ;9'jİm~^l!5mҜٞ\py5g)oP,%kow(B$\J樓uz_R~`%i90ktR2q᧟Sf=MLԹkX T<اB0_p-v)9F"rvήSWhLݙ ­< ޞ&+aYiC(@4V!rqA,2?xGE;Lc,>Mg}oV7|3uC x`fkv~lk<\Z2^;sCN4+n#5O{Ҍ:]ȻYpTFɌZܚn촯oFsK$@պu{ӝNn'ta5$QH/i8 EqYeC۫(4ɓDl߶+n"] M«^N+8h#2nԞRTt&H{a0)`?{WFd OXG%XӍ5홏F!%=/Y$E*R*#3xp{]޷d/)#)5E-_Igo.00D"{M^X P9|v B kjlkL*`gl%rbVXS/NW7qS hIjw2Aad p%DN )[M"HH}[%<)e^vHƦ'g>domG :8to#سq^'GΪ@T")׽OQ .52Qd *%B$LHϣ{ c=>M(gsL'Y(et)\%ǸV9kdIw(9qϠp aH*㳾_֊ddݥr1m,{8PW}4/HuŽ p\)}c:N2&S+K,.|khnVC^'愴!cc0$DHʚ}p;[3M7R\H<>ĊP.T4E"B0F&KJqD.i ޭZP z؞.Q}QHJPu<{HӐ#bGwZKw^ב YNXIO)]4;/NHpUg^p&I.ѧ(E4m}Z՚6XrN2Cp(sQX-\} IT&^6#g=^7?`9>~ ;hMYZ Tvtn7ظ%>g 3 +'gwnIrI[US_%W,bfٹ>O7[]+cƹ]M͛/nѻi'odrw7Y5pكrI>7wlҮrndzٳw7u.nniGS'To ؋%.ۥ\aLy~Q|>>=$Bf> &CRT&dKcA.%¢-|+'sD-$ (Np”2ZdaJȵ !u`T6޷o—{31g9cuV$+p R)-뾐@\p-2 l%m/i{IOI{vB0Il'<"0\U:91RFE`-~X|=ЈC%ob4&0=8_ qwHPF2hT|aKհώzNF3?Naeяky_,MyO:ೣ8{0oX~INXǍ@3tcwzŰ)HZιM9\u8gsk/ͷ_ϭʨ;沾ܻ-RwG\|v>z?~xO=_S=8³WK]q:9H=N>{ w]4e2tO9_R=man/Ji.tikΩ/_˿/2<Mh-Z`Xɜp"-'tJ(RkHfY)"e (װP;}E.+ !ZMVbId!araTN+gN.3qh; l8W\Bk޿y$f~0Gg{f?ƼiGcW5qʎ< ]At{Бr{.^{(=W̫%o.)IK*z9.;WɋD14gDbv !!!8n<ݢ 1Ó"Zdq=y$#5t5:gjxi㺦^bGFR?BDD5sLS%OvX9kb8x+?ٯ+vKO&ӯw"wN Փ|7r{};Q2 mۓB[-H̒!̂\td*}.S(FY-2E5K9LD|Ϲ?upػg/.0B?e<3G/t٥K?ɫiy,K{4u_wXzk p-RQlzz"%:,:,u(ꢵ7|m^Li~.ot[6Szwo)54gϠi덲xډKi7}p鑂K ~_me~F/-Ph LBَTd"+^m-m}" Q@xyR)Df','-K}flq t s|{zo:?e|"]O+esѠt bxW*VVJjOfR9m![w}?U;]gbeU==%n;znv6XX_1eEk:͂Gcͯ 0वZ+</ExAhՐ̂kb6[݌ь'r, XExm-whƓQ\M7O [Lˀ&@!܎k`{GYa14wYב.{_&bll2yœ\GZGj*z^K-kMo_]v ^|T[8|4N˒\Qxs( &ÂJd)I\9'VoS0Nd1G)OLj)!(&NvhT]?ُwZjV:tKj[t@ZLI^Fq=2K(SR)*ddREgN>3VL$ݧ/m"!ia2!47cE)5Tkcv,[Wbi@`fr${/)!dX]Cmb `$!M";PT0 ͐7`!f,4JIJDW'1f"Ñʥ;4܁ ΠFcnOc>;=1_2ET#-Qse<@K!:K4V@^oTEnXbs,ÔDSmj_h %W^0Qw aӬCζԇqU08x$oS[Y0s-$_z,HQjz0N$`R ,I2qAJuB9Å$  y˦v1%Yx$QDE@RH.ܬ<'qyIلFT0ȋEڧZQ $WEA8hX*+T-C^b-jIZ3W])4aȠtj Ң_֭b_|[1ʣ8I`1|8/iNBN?!ô0 N&^]c"uQ6~ QȷZ]\6yd-Dn>&b‘J{q&2UWҥ`ruA"ܺJ78bgP찱ڢR:/QhV@ide+ 탕EJôJ燍Uh<{  Edul`p{ v]ڙ ں!""+pmpe,{%!z=9hP1A߅^*n FYuE_HоLFS{X|*k`6t @;k1-P 9@g@!t9hs^u@иchdr[y-@K"mQ82Ggm]acQm\g$0,PMK4К5'7t9`,o+,4LEרH 
RhW֦n\GoEv½Zz7j[ 3$]$/iIުD/ %Z*Yu0qX 9yyk贜vjW?]N糲͹u&A4r@PS\tBÑ$0z`2o6:7  ;ځQfqu$ڳ<54灣dbs@o0|ỡ=7fmvX4Vr)p A(N( j=c5q^ku&id jV"bʅs }rp 6r 4J&xyme腝 =E =RŢ4֌5 =\ATXd L.1#dnІE}8csJ6m{BFZ{cZ |,$=3)%&};WZK#\5%i\4ҵ5ΏjyіaL Z#Aj3qp$kN)3hVAtISΩZ;kcDfN/!FINq;LܡzCsz޿Y^3[j w%>!7pC eYݐֿw,a{ykdžooV)]gܢܸK\hh.\a.Fs ֪\ΓүF²ֲָW>Z3z'ƺkPc@JD& L x;V}ۛtάz'ώUψUGbտ=VxmwBj5ZT(y.{%ѣ0X:qI!ÈD9Nǵxٯ;DI5"WUjqMʉecgcڡqOkҶ ZB[Mqb/7F2_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"%_"_^R3K_j;M/5+ _Pry5Ҧ.j3{RTM|C+ԁwک|* 0uP EN9Qlg?gj=CfiOo?Mf}䟃s+j9d+~{qq1`B1jJ5D߽eV*x^dY傘k&)ϸA❕5쇘w/?Sjo.mu:.?m[䡼uJ,lv&XHL6&}ݧl[7>ۿÎoߞ6bz/?߇#+،nפGXwjO% 4rlj3|՟5X0`f{{L<.C_IuWxֿފpĭ'n=q[Ozĭ'n=q[Ozĭ'n=q[Ozĭ'n=q[Ozĭ'n=q[Ozĭ'n=q[OzMRkcF'ٲXLTUڃh .mnay,[wq0 IsTDzO=glMly)ΘbdֶOFq-5I.C;ѡhoRd<9~42t(C[O}8CF8L$mW},iQn;|vGi*!l nFj?.{G͜#UnGfo7ji֥ͷ#M/Oඤުy8M$e]f\z]O涿&fmpJlAӖ6뺛×nW얿 jIEg:0mt}Ec\O/Fvko=ٕY9;#b>?pͻOyN(<?rvϺG\oAoC=#~(M~:;4 ΍~RS1+$Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb($Bb(o`_~;;5oީ\&=:-οBȫ="#ov{H@ iW:L{TiQMq@ {ʗt Z7a2U#)- N?* ^_nc?v?|ۿ*?vZ1sw~ôywi[ f.D+OxD>Lo)Cۊv9_EySd{4sg8\hҬGڞ6!1RUr sm\]~X\X2 0^1c n!Fl((瑾6PQQe]tAP(}.LОR}N]ASyAM\e/)Xk4+~ŃcﭜH_Ǿz%?n-K[ytJJE=" UKiK/ҞݱSٖu?i?t]?~lE  ~ًPij`iq)Of&&@>Mm5f'\Oz2-7qϘ>rǯac?vt7uolzfGw#{Ø|Zj,WnV[y|Ц?wθPiX7#@'^3u:O}<P~*'o#-6 yk.)nS)|jQtNdy)FWobuu\e7KonDgi\t)BO:FzA rM0V']=9oEyVm;ue5VjVgjvy# {ÍHVvǻ>+/Q/c3=5@Fy^ɘb(&sDl(4x0|檶Ϧ"{BpfbZJB,H瓊id;>Rೞv|4sNWb'oh(l֝f4u(n>bhxeH {| -0 ܖGW w(K"B$|qfRY*D u%bX%v 0;s}< 7{~,"~nj;XOqrhǑ\ՑH-bśĝIY&e.$WJY ]m4tJ l C&E\"Iq) ,sv-s O_Tu?wV\MHnmł$MjEG[},}/ QjYdɿX s4wr_|eNT7oOL5O2I3ֿѶwhgY{H+|]݌)Xݬ\-V:'^lN`3=8 R LUWWW׫("ꜥÅt(8Xɢt-2GAz[egEa0Ic,N4g&[F$S*e>MWHÃ`RN1d(븍 g_74M%x>穆eV*8liLogד>?;GWE.20׽AKR}KGU䃳s3BU$zHU`H%D%Ef9`X(`bɇ͌[х9]JPgZ5V΁P)fkw6tWc_1Kٰ>ܑcR@/_˗F?}tp_>v>t1Q_'` (0 6N Ÿ;!Ak fb^5zJUkظU- ˞q#Sz}>K7cIݫRX |gڰ.rHq6koUDTPUJ[w! 
t*Avۄ=caeaڔ|Lio$Cxmw҅B3'&c`ZWӘG_c _4h$`) 'R GL+6t^k`_XڑYoS~ֲ|!˒j$?33g qύgu$ള0ɯ&uV0#)6g sm|QN#֨\cTnl{T`&TzI4j*oPڇ KC X2F: lp6HFjRzڨ`iZ,} GWr/y1s}urx/ oC=g Җv v'˖"R]oA<.~?=.N+11ܘl[ z5_>A9-oU[k6Bժl[M4-"{MD2lo)&t9ڗniÍ輦 )jE{$lUB}W 7MXJ݊F\ wWCXWCWW–a2qXg##nնS5푸qUt_`+ mJXr܊W(njT P!4[vNY9l;̎|un~Zn>a,CCDB}r6Nʯzf&gLJ#1<#];"Ĕ&Oo?n%{$aEވ.S"5]L',l+Ӕ0I_(=yəA1gy;@2R11-R;<9h1X>㛝ƥ3 t<٘2奼\qXtg؇;# +`N+Y |m64qfKhהQ?7}7ڃb# $g\Kr1{F mm2[K,5yzdI6.l4JgEV@cmTg*Vz!ġT0w*JVV^ckb E;|t =ϰ%~O0o@:rŧN$bQ?e*|R3j5A起 ˙4 G3DP5G A)F9!:%p'>yoS>Ep1 6eIXR"քjy{5$hS92U'I/v557F"Q(ueb1oQb,Y9qT#5Mg.+:E]H.z5KU]!bY?dC;W~=kXP,BA3{B$9k.9G,L")p@d?ǩ-})-ȸa?NIעG;V( B#LP,ec:f<O@^tB_{J`pVtb `b! tH]@7@NKuAq~ cq:Χ!l~Z%ɐo&BL}74}_!oSçr1%CGam-9{i*IYe sBeV*ϦliLogד>?;GWE.20׽AKR}KGդ0xTzHU`H%D%Ef9`X(`bɇ͌[х9]JPgZ5V΁P)$fk$> /2Wc_1KٰܑcR@/_˗F?}tp_>v>t1Q_'` (0 .Ptg NAk fb^5zJUkؘ99W02룁!\9^R8¤U),c>GmXnn,\DTPUJ[w! t*pQ}"rЋ6ƽ6.^CwUgKT>1(qE$ gǙ2aV~ПAF=\)ZJgsuu6> Dy+L AqXIy`'giuI¼X4 Ҝo" GC I,jgE`EIDGwAZoPar5ڑ}+G;F%-Xp'\PQĖ]BQ ?g[5*uXEORubTKQ25 L]G{N,^WmQ3,TDԲ`48XR^׌ML!K\v5h]2NfZ{E7tѶ^ΠB@AtD+A+ŏQҢECڇ2&&VQ@B!gh՞k:F諊iZ+Io\X梒~i^UIKzIcݫYO#+Y5iw-_1Vye0v̷{E[L u5M@8(]yQz˭xCO c1t'01,,D۝HQ\܊+JDn֮3w &k o`*U4AXL!kCm`AL) nl##/v~Z#~9n|Mgz6iއ?.勹sBNh𗲔LI^ԛjU*]B{13^ 0Jͽ rFuTX'NJUgh-JIf`H1rK%bdF OPǀk-s@ՙ(*08U̲PYxVYv$5{Jɻ$/E`ھ F/t%6F C1`13 jL+bJHH$[*,C'ZK(Ib |9h/#{HHLBDNܡ߹/hWˮHlT3Km]vAvك ~ p|X2ˆ^FcD2Y+냩+jS:-a$EV 1ldb kLݤ?-?SfV֮֡b\P]dYVM'Kp3"` chLJnpb΂wDðq$$_dA4]KZ0bn;E)-5|nys84_p,YzaYm᫻+3Va 3vU'꽼Z+ItQGܰor8mxv *RKRn^$i6G8|3 \Qbb^`-II RFҔL3"c޴ +sX``͇Ӹ3)iƩI#ef¦˪NoX )mu N7 JiJDoSஷJT,+>O'E@A ~wˎ=V@}U?&.C uuDVhQic>u%MhKPf;YS8n]d[Z9e:\E5ʜ1F: lp6HF'g)Oc.*5[bG5 SH[+m_BU2J7%SUaoCm}E ܻ=oeb&As灤iqc]6xe@3 QIg VW$фp0rӘG0 (&PfTet`E ***:f6rsV8-nJ]E*_vE,dY |Kr+˜RYǗbLU!^٠Leƃ=~$ f|mOz*jF"s6!RHHJ8TInUX:os9ЯLKn>],Zr_L RGJ88{q_lq} U] )>:Z{)w[?'fG!&mp ,H [_QGG lQ:l]A< X!. 
9* hޕx< ')}#)2Hp~u'(~iL3%y !DHH!I 4Hӎ+3%Ή\K'¾-̛yA ij%du>0oଡ5+g -uַp@LI"W[I8j1]O8IT2UN>L 9Փ/ Q$,#늫:8*ꖸ"G+Rթ)̸qUVP 麸JTqy!*yC fB܌Jjt+ u]\%*f quqűZV8V);O\5(_{V|un/WD{o~ko2͇zX^a\a+.(Ì1zϝ +xzf70% 2w\3b^zaxo ͇> P;ax_j%cڛ>c XƵpeˀϿAe/N0BD"X܎j,oM$j:HTʂ&>#`T*rK X#r3*KnF\1]w]\%*I|FqWW\FnE\%je}5J]էW tOmzF޿lEOxw* (V}kCe0*T ]s&i"(x7jpoffJ #E"! &+IB 1 ΅ ; cޱI.t-I)%)d얌ݒ[2vKmE##ڏ@Gӕ%cJn-%cd얌ݒ[2vK7 -i'%r5?t?$Qp9|\Ogw*qb M􄫊q*LrL*・*y9S%sJD:Nt(q[#"'`TdQ)'!4]!cFYYA 1zf!% 3Bs4fks\.v}0Vga.KG18_|ߥ (_~%!GK:۫ W2%??%^Vf& C%7iQh 5T(V8dY:4ԗ:4ԗ>4;4"h)e [;1 >֚0"6:AJ2 ggsL(QmGs"9tdXD - `S:Ӕs;d#g?>۔B]j4H;[ D5~p|V *GW ڮ ќ$l>ąs@1Ǫd<^ agmI WZrV9000 #z/CiXf;||/%t`?ZYk<4h86S?{|ԟ}؂~> SYa QͷCX)3zwknCo0J^°7qI'(/ݰȵ{7}m~n~>G/׿nl Ϧ?@/8f=7y4;Guj5Q{Ԋ] ةJ D?}HԔ1тFQ4`pH>96qC T><.>?-$(֮֡Ly>*ìȲ7o:7gK9&t{aM{Z2kR ,M}`iܚj|e| _^/k]8s g,y=ܰ ݕ0Ņ1Tx^^WL-[PC$UTjnܷ i 2!O7Nib)%)/n6i;b97GǑDtߖ̢ Vtzf:ڨ#5Z/{v6s}tƓsbuf3$ʶ?|cQ2|d0Ƽ͌)*e:&jZOE u|a={'/TZQ։褠Sjy݊~Pq|5-'cL Ooł+6W؀i ?rwQ4%G-njX =v`͇ bo[EZp?ڿ7Mqnk)fMU׭ vlذRjP{(V7Tf(䘫.b"v7&jz]D͔I1ޛD!jUiE<(VDK_rĉ1y(]L^=K G{FVjI:lQ ETXΤ N8b$4(vB!(('k[: (W`IlB O"veZ0^j #290};S8f)_ي.QwVXyZmˉdmWԕO.zky-B@.s{5$hS92U cґAr5pjz(Ԇ6EB`Drg=Q4@W8uҋmHy>lgx#4O8ogY?C0~5RB\^ס"GRH:EDô~9$IcmqM&#)3)Uyv;2RK#e kS`/?74oOqosum`%Kdqa;Tk..UKן`~d5$QJ_iҸ Eqeby`X(`bAjtҙJUlZiKr$%œGoñKE\ -9UKP4~XoP_.\o ?~}ߦ_?~>{&O ipSY ;Z?^k8Ұ|iXdA]2frnXBULW7qj-2)ay!~`ԛnр&~w+!~2+ >FixU-S0DC0g`~֔12K~VȻ*sXR|u8=۩iI4J2EZ0 ^Sg%1? 
88KSەΩ{\G^H0XrǁΊ#  6 )L%$f1Mܫffa5V:N xcʦSЃ;Q4?#kEx}0(Fjon'~)j:hZ*nq]\>)GmR@ťQLQS@}IVzy'Ry)μGd^ʣ0/G&^kqNhaFD$c$R;%yLv *C`R띱Vc&y0k#chnDd6/2G]`X=چ\p=TRuQܸ[ #>[S34`Qo;!%2P#L$ *#z[„ J U%pACRH `]QQ87PNbXXNJ\a6ph8OC10^ Câltr!H,f}=ʑK/JBDs I^`bM Aq먰N Qy8%Z_G S\-&x#32x=\K\nɘ m6Z)(㾲Pe{7RRl[_lE\Mܴ?iF~}Ci~o2}0TZKQ& Z%VT'MBH&U42@C'ZK(Ib ,μ$R:"*w-e.pKl;%Xv6긯֙vgnxaCsEt( \UZWHʥ!Pjh 1Y800,Cv4H5> 'H 0CxM"`08aԧVI0DF啈u :lܫD=?V4;1 S8"XB; 0BB2g($^2:ł"msI&ŹegDl#Yq'K:Yrl~{=0EJG|d5q*Rq_ 2꺌T/' X<=ا_nLMnyYWL_F@E>z.㫝֔߆q\> ~˛ݠU{v1b(BPB 4BMNy߼&9yP+R[2E j, D^]uwz21l !fH 7g)Oc.*5X`U_=6~?ѵ[R-(1O+ k.)d(N=cXh'&$MNc35&PfTetE *(*:)89x8@M#s-u76euٯ]NSïS+ţp=0 IYt |BuN?r9(Y^r=Y MG,MRtRK>d)my1%[}8p /@ Vk ZeoEà22TZdI~qim)hed\jCX3((dQ0y~"{QԦҵq& =_z:O22w7/|f(ԡLqݒiTj Cm/o^yqu ] ͐*ΤL},>^pKȫhJJTw߳rR6L1I1Jii"Đ%+cV/x]\ U (G?㩟P'ۚnP $`f3-Qmk]}b[Q^xQ|0^`tɾ<_fՋA 4ĶMW6}|){W-h⼗FC';6%,Vkeۻmm˸-v!Oیf,4fw9iC&q.JӁ#Tǩcg-`.QK-9+R 3lXcg _1xԏKf͍\԰ngT䘙m3$*ŒZ҃/>S.>hgmʆ5Jʼn*s(VL,MTJe+8ש^gôR>Tt̋bhf|vE#>yXs ˶XPFEv^;3*6P3'n󖗾^f֍.uʹ뫳7-DvEۚa[{Gιec}̅Yy>%E SvoIu6ˇ๋w3Gꕣw\b)}qB:dɂ+0(02GH{/ꎞGRߧ!qƱfkȌ:E*r/7DAHC}<"ҒbFi-:Z ,FWtwg"zl/آ#2ul!)0 [Tmzh 9zOCVRcuwx9tKak-yWCj{X#WXQJ.\\cW J։Q\N+m#G/;mއ!d77bl`%ZNy+v˺,Q6ૻEuX,2͜IjM0հr /;9U=n9%h}1`o|^NY0>sn"a <駷gJ<5MD٨D-;yTUߡ+5UWwn˨z|j8U9LTW[oI2vA?BKKzq5ٛ]>y[FX:} %ےmǶc[-ؖ| s'xҒc[-ؖ|lK>%ےmǶc[-ؖ|lK>֞Fg٤{e=3%t(Csw$62'Ԯ(Xvb[!^ؠLa8΃q]gI[ip{ZH; 祤^")⌂"8G[mQ58Qguwdrֻ 锼D}gjCq}*j`mcI=.-\͡]D[k[g8 $FG^«Qɬ`[($C K)FD_As߂1$7u="`]A< X!.b0!GuhޅKp3"اE:5%n@Ŝ(aHfpt"Eϋ(j7D 6Ltmq94)ɧ? !DHH!AI 4HGJvtD %BXo[" zFf4pgY*C98P/gm#7 !?[̢.3&g )ʡMO82ܡc Ǥ[n]=;w'(N(QD !PC8E2&xD.O*!4+PRNiqc8@"]c0CJf/h AP܎1x Y~>%&t^, 1eERX;1| &LȱNpcm0YJqTQܦQE$aؔ $|1w /9ټ:%jKrE yoْ ܟ)+alϑ&#ԆYZh8&~`(SRA a m!B0s|V~Yx(RRQMVD$f^"WUi>zʶwUc-_/_5s^k .k ` U(4 iJuRUܲn nnPzfA-]]cc'mK(yZ蕲_-nNLG eZ30{-#cԀ/%^[+gi/H <r(OHr!KdFe) EI(0at]K-EɃh#Zk\ Y%}0\JX4V/O*>ޚ4(Ǭde6VޠEb `W9̅Rs/0*p8d(nI"*FId>ZH1\Jc>jK ^Ȍ @d[3f#gf\R qƶPeօՅ{ՅFwiekFfѭf5u<O~@az^X$C1`𩴖13 jL+eSP$IuFӐ? 
JڤB:3/2)ۏi]I;.95sy*Z;wluf["ؕ>5$:XĎHBN @i@)@JXB*4΋`##ap9 Cv4H5> 'H IYf#g>OE#f-5"Fy5b^[cQ-.`5hX!&e1D8&"S 7QzHLzz "lO0I̥0soES# sƭ.iuI %Q4ژ̳X<8Smfq2{9S?H{_dzUH蛡ʐ!5{R`-D(EZ!9)@f}r=9*Szh8@ǏI!6W٣CMp>ѵtc,Non-1?z_0٧׵Kg5ϭt[{9>O`+\~UNϹ|f#36yp$5w%<$ Mgf۴RʚMTcܯhl&;yҌND@'-}WݺXmxGoE7S$Cl;.ða@%6 FŸa)M Q 1#2gl97q͇lnC'RH'/l:y]u9sBʽzݦ)^R&ET1'Ëu]]S(D (R5B `HsG..?D-c":PDw-:]~3c1zΉ1iڳpuҋm+Hyi0gw I8<ɖg w_3 RN;uHKa|NP 1 hY>0ƅ/S :{rߩbOW4OA|!y9g2ſt; B XRLݹ)pc{y颹7 gE-6.VϥC*m\ L0!]wUe(0`H}tK;R$,̾״Wiˋ쥂b tiPڀw6z hhJdh;ש`j *aC;)V;Fͅ׳111_~܍'__dr{!'nuCvCX,A?t`dxiݛh;սec\ieZ\cIs` .5}W3W1hTLF?ƺvuo޾~mտ~ի0QWW`u30 .續Jw&P|u׿lܵ8W߸k] n&|lʱ캉_7ҥ|c  n?Im}Ǐ?d'vų!P}{d66߯|W*D, MqڂxJiSC-bcUIW`T`'jYYT`#歹+JM~\@};u LDy+L A䂳­L~A qqSwXy# GC I,jgEEIDGwAZo,BTW6*ab_G+wYj2**T;j4jn)jl󭎓o1Nw u~9~wӛˍꓞ8>Y~6%Y;of9E ԲV,v:QLq/uNrDSy0)dzNea4ey$˼} 5hki[ SnIYeh?Mp=М\oR=}i6LJ޶ veGJI (SǺ*t1k ctοqaɿ5:-]H fa^ج3@ 4 XKzBZCNfMC/ {vJ{?RX O<# i>iRs}u!Ol食~6~? 7Uo\qo9( >/iatA,T4N^@7d|FR ۑR ??պHV~sU9y5t?%vȭ;Xo1巫ss?뇾Xjf543衁?")vd@bCePG4ޞ{scO^cἯ[K IEEP#`P+:S ^ He]ϗ\D6F*ME1oυP@Ze-֩D(>r ]ZJ7g{2-tBM}d5GqWFM|3rH6c0i^SGѻ'riC_|f_-्.::vH:wZ.i;sxi^3,evMCAK6بP>2ږsJRJUt؏vڵeLvDȚe u5 k%X0AP8;Lm,IhhCMt^=Og# A]WKs~Sw-<)i!'??ƈu.'͝bŇrܩ$(e; &{ ߽\'M}gSLyTjR9-kUT+ _TI;a35d8?Ǿ=U Ϋty|Rmc/vHJ)TB)7)MN"^bl'g ^y( ˞n&Ԇ1˧of/m Z.W÷s"7f֛:qBnRgcih .D!aI:8 $@%$P*N8}uKŤ$dp2$vxR&}h$KBg+f6$1Fj&vj~-y=Jv)4)$fIJmM !ɘ F Vm3qvhv'pdz-ʬïWVZ8XQVc{=}S}Mz^Osӻקqbħ|{x2‟ : 5;!)uQ`U' 6 +q8edd<і\PkZ^&E#/I~#$k""9xPޤ| Pwc^;*xE$CܧI2{s"裝ч>.ߥbJ466H=-/!HuE3%o3ľ3'!]\7r Iow#]|ֵipvKBWRR R4:}ҳitzFitzFitzF9otP9c bZ \v4A͟t;0`ll :Mz}{n??Ԩ4G簩wKO,7&)V=\͒A9B=ӥ/-#%z-4|<ɺ|7{ǞBD,QGx@Ez-&W,!PmAD(-+E L7~)$8E9S>VlWQM*x j.}u".%ַBv)Ngl{{Z,qΆJbަ$uThamLbrH|[G]ɹ :ĝ}\Tlz(՞-K@#e_ DG,ՑuXkUjm2RNb }mol ]o j _qqЬ8? ;xd-]@S h$DVN\,-uܨ^iAu~ Y fS[,9dBv=C-lITDruI+q[l%\vձvv`_%)PdR. 
"pʭk?k&Dkd[[İH{hJ`/:irlk2d),GQBޤH,)6n'NsǞQqv"qÊ/dI}Z0ݳ; 3AkP*A(b[d@P:;P̘<%[Ӷ>%80AkV>,DH){[ےV}P*RӔy9\vb8A^ 0K3R*s5lå:ŨɖboPkvEf`mZE+X5\&:ןz~0ѯW*{gоDE4J)Stl*r,hrɱ(:<5:JMl l-h@90h1S, UEI,l>"B [3qvaLoߞ'O@7׏>pJ7?~Ow7'lSwK }_G SdrZJJ$4"h&hJJ*)J6&{pPl7 '0 !,XrVlP '6,Kq>7ZpNu[z_w3`߹ܤ_?үcFr6|vU"gE.J Tr$deǃB c3 g3Q􆧍=~̽hzzYd y%>qBV(P!tֳ5 zb+]Qiby*\iP M;d8nNl9.^F|}a""Š/\(Xbgƒu쵾JAY_B :+h*&ԛ@Qk 8']Fx <;Kb&g,IC-<mNPit RK//5z3ʌHϪt\ۉWtjNkimsex20'5 <1tFw_gNC<ěN[0G-W/ŗ'7J&V'AΩXShAPD|v2I(h5lj`"1l\^hܷ?󚍂{ 9o*Ix]l˴B[y&'Ys|J,f ^WW`*ݘ< D/=ȶmfXqj?=}O٨JټkU{htnڑ~]*X(łA|eẝo`ιiO/cx>H?-x2oѭ {t@cH62fmрleVT||݌Ȋ#Q*m[m-,&m؎;'#TJd  C>zڐͤLpcvA"f?j&Ύmu~6} .%/$w㭕tnֿ+[ja oK 4ZIyנ! W^` Mv@B-Ȑ|i ]|1+Oт=Ҝ3JD ⼄l l $EVT p`g![qxҐX1jO,9J3.ɳ!u'QQIzNLTDAb?~~k:ִ"]*PHi!1oQhQj0dЕDbԹlIbHmi hET{6g}KJKj% OC-ʩ?؛ǫ [(Ta_>ce~d{*y"Y+Պ3Y_JLҳ3PSB ZKxLq|=~d#ﭠJmcIbX#.g~ wIvxg]E4|KiX<4_ū!=M"u7!N )&n??,i)Y@.Y믻&6#+..cQ!xa4ZW=|8%4 ,ǯW$OT9`~li`xq3?kwݿw~o|>?]^x]'`/b8Σbn%^톣XmynQƚz2'P{ԍhF2&|xR(2e'GV=se838M*^WliJ+{CZұ"dJ3_yP,b5j7&zxEc N?}xOϏӧ?|ą?}{uB30O6M$Hyou/׺/w5ZZ8 Y¨~l fomXzY_|?? gu.w/NE)bsq M"p6XU=-j]!fWZ]2kbI"Pqt*F&n>H|Ug+T5yYኺ" WǕ2n0 k/HN%@FCXo<(Ϝ_QiI3̄荳ѣǴLe%cEySǀ)A/KRި{lJVRD: ȜFE,<,!Ӂ?!95IAVPBwBm >FՕA[= Hou iKa <2VL@3\>t%;o.|Z^Hy# lTv[pa,  L'm˵)i3*r%2FfE JN:˱⁦3 .\Fԗd4 :kx2)Y'A9e0J!_I˂v/NKR PzU~tgOrV!2&kori9+X'R@DlzUU}D>p](>)/.1"š\F"QG{L*NzQ7-63!?\ :5 P G X4-= ûCZ{w;zwh۾;pQc&{T_+<@:I^Lj^%c*e7uv=[p9w,a#c፮ŻLǵpIQhSdZe:'1BLs H_k%9޺zSds+ȥv\{n~_O%L~;Z4M._Ox0ʦ`mZѻW7O6?'_ !B.PT#%+/{1kmԼ'=B%![~ukҠ]kWFC({!zGWRP+m * Bȹfкedz7Q|FFuzւ\BeJ xC*-c"fw/﮺\6%-V|NGBZ=#ݕo6#:G:G9i-灁&'~ 1jfB:dOC>?UѴsqv3c5'ҴHƩ5/.*Y3Z\]2W'g#q٩>#T u׼d Tr;t}C> G贏:)-f%)&-P=܀\KsUGe @EeH1-Kq.P "9+saˎ9<.Tgŷg棝{Iz=vRy m%s>?]CnT{jAÛ.״[['Kg좏=JӒFcuh@Ve|<`>GJwX. 
Eg1{5$ Nq8]F)+syq';Kd*wC7iq|?8 jӀT3>>>y̾5|'>P7 RzwDnBG(~,nBCG({tC(7揖hY` U T(`!oOƄ?WsŐutd~|" PAѺJUK<'|m; 7_КRQQ([ "U1ziSw黻;,sD)\jXthd)!̢4FG T49F3-eN`UIh"Qq#8PfQ `@k X.XwlBmgk[9-Q^c{z!"|[LRbz}Ud=Up(~1X3x+͖2 f5<:q]ٸKJꐒZR58w;A<BFeg8RtƸLf06RLG$=Ft\q$52&N] eޭ5I[<̡@~u+@m{UT}U/govnA .UI!g_9Ԫ@t1їA=H Eu > #C!f73Z#8$&'޳93$t%I[5c gU9oBa0|!%p[U'R1QJQ:"XI"[ȅXtJd}:}yֺE۰K AyT tO2VRMT\>CYæְ5l:d n\r""O&hNp& Q_b c0HTLJ.%z1H18$.2dg.s.{ҡx[[#@HB6?;x<~;4@WǮ }9Y_Of4N~ӗ:sb/^‡7=9m)];ȽY_Tbo^'b6'X+zyZq㻽**o>l4Ɛsٻ6,Wzr&XdOll 2PWcTHʶME6%n=uꜯν(m ZzOT\$eN\~+=gi<+)f2RBrb2B!D; cƠS"'EZFY̽Zʳ`ak)ꖱPuXS,\ =Md5IӋkz7c?鏛'zCi~;\>1bS{IMe`H,#Q]! 'S.Dxk!60ceRyCP#ڤ|Z|L+vw[:[Mq6]@(Zn>ݗե=Sz?da{VP4#ʡ#\%XI%pJGA6#6 nJ>ytL~n{~}ytO=F',jJUS@)؜lBL{X&JU s<ǵ@$SBU+rL&4$Zz,Z#ՁHO-Y3%p/FO7զOz3u&k0 ȏW}\s;esǼW[''D"yLεCmYD q"yϬ-E0!N!r0t TRH lj" X))')hȑrTA҇*h&k|}P89ܠ9,Yַݱ5r8V!j4W%0FM_Q=<Ϗk軜x\\\pK >4C9apTEsL&c8j!';‰lJ K:ixE蠧z:ҐNjZ(#Pe39&xkF[+p:" TbQ&A*8T@ڀhH-u5# H+a]V-dv] !+C(mˮ&pk}"[җ,M?/mtڄ+!#5[r gBCB,EARb'WE>X^Θ@ OF@;k2t^joM;s=-׮^Xu Jy4ZL"0m빒Sh]dFTF!FB9XmwN>W;mH"m}*?<%;VCF 2w6J .p(̥V6/jdIŨ5pX0kn5sQUH#‰C"ଶq. pwAvkY c{㟘FJB"S%n1#tzFZ dCò,=^;:@ 7D> U9Tt>JLuyQ%x?(}z{Tid'ѼZ3%#SJ3, F+J h(Qr%@|D +C!AC(uw$X\!FВG Uh{,Fz C1:pQR: |5=ִ 1cOϘᨇv!f gQ. B0 jxn Bp_ԩ #u +H8A[[%'(b}7(4gX \_ njS]ajaCk.;Gk݅CΞ W׸W\[UPw.g7#Y~^y]]zrQrQ9l-. {ӹvnEj3_^|rqiIƑ@i8 'qEfy`Xa`żDo]&_Xӻ2rl]@V:Ρ8<@pLj_zL46C~PneX|wןzcnfq* ?Zd"̾d]VVk̎\=0B4?e~*ѨCl4_h1]1N[FZ&|NOzWgT9L$sE L}q Ta')0MPVD|Xeط_4^)n.Ƈ)hƂ3,<N.+©lXD;uFS{GfHXV>;lso6=ϷGtηZvVϓo.@L KɝσAy~ײ ,oMVPu&jed 从_iGmw7]uބ;LkT,*>2 ɸ@u&e Ѡh@*DG)PE?}jErDnzoqyn}*.U3KʻNrO O^<HmU@" jᕏ)Pig+fYU 䴳sh~q:5E"NR /s 7$O>5rz ǣpk+_pNM1=ٮ,g,hs˲Ds'lbfR1@臣W ~ᐊ _M u-/~O^SÏ~5lGQ܎\p~}V[jge"Q̭7$NP.*$HYpPY.37_oe,NMj ?U. 
* o]::O'y>-ŒxmVP{L?r ӇW~ 0[bzB~gr={/ySLz]$99.hN*z98Xz!l6CBϞpGOE k ; $Rr] kƆTtg*i.VmYztaJFkº}M׮MkZEZ5E;\ІHo$сƳGU>G[ QV^ƻEMO߮R\,ìE/kC"qEm w-ACh/Ӿ*~{\2o M+]<ksn ኍ 70|&P ;ӴƻB*RIjCRVD:%ږK"*>-u봭MFÝҐެG jE`UAc : o=uy%,X2:w<39ktJ=-ks|XOu|0׾5 /Sk ԃђͱӎI;Rݙv('+?-?ӵŔJM$*GqfKI S[4"tQ=rDEaE3A֕,X>1!lJ|ݴE(E-6,umQsۛ/ZK71XZURxTDdT$h󱋐UwFB58 V޾~hhhxu/agI]rV距}E> ]1~߻¿/PhxË#޹2J$F (̊hX"sNEnc7*A KH!)p]09.?$U1\s0=R TE5 WH` `,[lJhW_!\ ER$'^}&5VPpܷ&5\= \IF'T>+7oYlVbIU%K#)e.T1Hƾc_/` 0}ƾ ฐG x3;U\fvbv#Uʡwb_f'68_d~]eStf?5L~T3 C@ՉKM0O ė ė H]Zcմ Y1*sKb1{ |Jk IG$X#)He !#[UT~\3qqQhH]8͡}-Y3\gklJ|MS{O ozzL)84mKyF o26vX{z6K4O9h=KH^R)#|hNV {9Y^/n.V&AgQ~L[&%<&):v6EK"BBz5g͙e&7FZgO3{-YGR&D(T6 HA׾#@YT, IDMk$CHB@ɼM1I"*H%L"80U"ӥqt8GqdI\o)y;G\ JK#j/ +Ֆ0\p.;?8X=?%{vjyU%0kKbFJ|ըts2`-Ӽ6^|ޭwJcm$I.Atij1JOc乏|zp$F$Yst>:DIRo x 4^d4:Jpy=CVʶIH`HHG >jDHB9 L<(BrD 3Ƽ8yg~]ݴtnvv0LյIEdI_:Sdq>+d35AgI$J%Dm,XIk0bP^M;W STmktämP{ 9KG4ߚb2%mvY!PdJ3qJ-e84}(&:3rKbU2@ف6JE3lh(m!np) 6IIJQaPa $ 2A$=` hXRlW+jƆdcl(„"4bxV.Q58MEP$jdQJ֋dcjP{׶n޸_۪=1J@Ft3Oi]ثXo79#4AxCN>U(H2t)D!1 iOLոw0&M3SQy茿Z##fݹ>Dʹ&)TWִ/rXZh<.A6d貍3`Mm$*^7EUEJE8iP(%EKN*tX `]`1i;+ m QA&*Z*jo?O%"J134ZZ#c3qﲗ 6ӌS04B?bb͟5s2]Ot=&_P(YO7J8:i蘮Ť]ZQmu4TH OܐW|awOXP&rɇI֭9x5r(L;NE#ݧàJJO"g[מЙm^[uZ M) #HL$4E'=c e*$đ1?$&EbI3ncXEQ0@/ e q77,Y1?O.\Je[JGxEJ|͌I$c,%ȩ2-MYD%%6ςU7j< & $X6Y4dQ /(@lV$vc۞қ]"2.Ǒz}'8rLnPzdJZ0R9f$alے|tDaeOup{~j <[̾uی;,}+XRR0$lL-@udhe|` Fiؐkwa>C2)ֆh\fgd B! 
78 _]jcġCB=&xXmdzlėZx5M6m{}BuY="RuL eN1K%-+H)[=ҧ"iRՌ׋pF%Qu6^@힉Y2~zf6 n"сwnܐtfYM:6SgjzDj`|gkr&vL!;mB_#<_nfnI_7A +[ŤM /tVeC$ FH JC,dЈ'.yj5lP3쮀F:Lb, 0䤈d̳f'0:l%:$i/y&W50n5nG `[^8xz 9J깬6ZDn}xS zv bb fQ]tt F- vrrӉ=cK|G>3Z5Ox| ><ƶz*O"> !T(vh\m:NTBsF[(\lk0iz 9KiyXAPkeeEGɲ jݰ8Ga`zټLRKXw6ql={NkP=Έ!Z-@ZDZ&k]"e*L_5{d4|Ɣ< dJ_A_#=@p}Q)%]V[]j&&4` M1x) d6촭;"BJXq/,Zb{mc=FQHDYH={I ]%ZMYE!2T/ofi\^ji:ƐT'i!& o &$kUQum "x]L<,q4tJ;U54]-v".œ+-7g_ N(dojD2_]}8[_&>כ1,~1ֿ>r^YŞ )aO =gƿs)dmݨ;N\/89_^3~9mmNd0Y%ղ$Df[_,nxB&﫨U캊/vvR:Y]|7gsXfxfo3 >~2R\QwmHgwq ofp3sI00Q$$籋Wn)lĖf]c[._74XmykE<=J՜kcj[y~k/|5hx}M gt-Oq ^4s̗](BV||Ņ/'w9!]=){z *FA q<1 g0b^'DϾ9|?o[y7{dsAv5V!β!.d$w,}6Wg7C~P<{ḹ< d??z~|ρ}q g`=. OH`lo~ۋf|G/~u׆AEL[tFN~װ1W~krk\Ȍ1ps_dϗ aV~2l ,nAQVkgCK~_߿5h!vS_fo}hc\;ct][9qM-S$C3.< en'1EY[u,kϕ֨*CҹY?&˿"6.|8i(ZqMipFrK$Ɗp*1p"h;F},E_ǸxQb4)9 Jx,MN&H4DzՌ&bE\& U uTaxbG[+㭖xVkt>a⭤iͶVﭿ@LhKOˑ%"U6YQkG]NBeVF BH+jˣˌ.0Q($eqy@A%(=%p}=Cԓ=r*{Gr۷е\/ڪI<@9IʯqȗH Vqi0V |Ls`zE UJ7 o{4/ƭFaMr\B)ZP 2pC).ko@$&ϟ߼W b_|y/Q' _pgWӸ`q~pXji"Q)VvC墦L^.z\j9k,M'g֮/t"/"⢋FNBۦmF'^f͈ז:Z2/WMAʯx/jquМ̯Yg/?jB aomo:uw= 7>OKRQ0DOoW[!ahߎqE٢OEy)Z" 9ɕIbYRm r=5{`*NVܐvYsbZFZ޵Kbֽ]fucJXhY˴  XHE)~Θ ,x4 F8Pq4+;ջ:_sSm]ƛ/EF^R\&Ye qah4 C$z2a&ZƏ0Trߌa(oSpu'b Ww"Qpu7jxZ\;jWZ&D*O2WHU!\Q\NqZ`Wޙ X\mg+om./8쌰7͋ɼip6ڏ=(=8c0#/}F+N`Mt&N3;Lg*a4QS~탙7ܘv7\ާ8QeMG[b2Jo?_ F/A*s-/#^R5l`nxy^<[-W|_o=;BqyvJj&؜*r2jc_U37~*N`d*ĩUVԼGWpqBpS'W\v2LWH%(U_MyJk4sɪr2ggMgͯʆ+>Y>a~v XBi*c=RFM\y vn a)v-#%5pIzɩ\8j|RQ-8KŠTt|Z5U烈tӨ@,D3 ThxO*L[R!t4},A#! q-K`2V.o&]^rM'wNzR?>'DN؜~]iV>`ScUM`e/t6粞réKV|4TנȊj*nu" :TU׽8^j#+E}L;Brq(Dx+G$%/u.*ePϘ@3cc IʘA&$G F)QJ*4K ^M:&EM令uir}kժEUWrTY׊_{T# PȢ-/mj>x wJ4Ҋ3*\a蕡oEӡŃsWxU+:qtbe*:^zJuB/Hd_2RJ R+S3+Z} |Q`J x3=w{GyхnnqnV5bH'k悷T"DB0*Klr0T®b(}Dxµ ӉFMY?chΛk?P6ҪM>`NI~ = T@+0RV1U9b I07.}Vq :CA Ky 2Bʥlr"`s 1h.( yRN h39U\$B u8 MxV;V:鉶6Sj(F-,mun 5gOU\Tj:,k>'WHQ/QqS<84o #RH{j ="2/lC  \.@R8cD$ %"R$@"egdRQxb҇U"&禛3e)ʝ@썍ЧD",gHH"G52|տV: i7I+e6'7r1 6@e8k "@Ϩ1˽ F0c:K $ÅuG)_=:-;`wD@4HD"Q* VJb  ! 
LWk ( e:F b^E`k!<0R ھC֐J8OʈW!6IK{!mϵ[KmM,$t@ Drg=Q4%Apv!U1V6 FPfPZ?놬ogz vH(euKS hs,"aoʋ!aN'60kez(-q?M.;)ė£})5Je!V $"8%v$tx.C3+۽= ^'+a@Yѥcc*V:ΥC."3 m,QQ|ŗ|F3Ma; d2J0'MWphRN1\Z2N9&_5M%xֵa-yxv'JlOdf;?]/&N煳`%sdqeuz9(ǖ껱3~O/Lo6;ajI-1U͐P$V63O,`!< F0btGwmv:G:G~=괓Z]W΁f!LHj?t#_1K#|~xjn~TMQ~+og3r珟ޝ}.駳w0Qg<='u#0.Ձ_ҝ!wOٸiMqUMC{Ӧb4= *dﷻBEbq= dJd\vF.E+N&9v/N;IIl3?}`r*+wF|bhTj"Vƿ#~w @NXe~qtAX񨎴(M<-oe1t_fKTޏ1(QE ǩ0zx 7?[!2K@~VȻsHRWfto[I$(!j 4xM= .(+ʤ'giJ *u/mr¼XX(1p9@Ē;$vVDQH8QDdxLye: UVaVaGmoUx|eսNx%2`ox+-oug'oUD X&H IÚQ`H!*⒃E !I%Fj*5)FE"] X E'ՠۚ5vOo0)8G7\jOA*l:'G&ؽ0,)RA24ѭT]C])Y)ɬ,s=j1/`lt@ ʐ =a(R l>["7Ut_.*uXEML'TKQ25 T]W`G{I[6k  b*"jYd`N `qǒBf\Elbr \=kFklJ?aѺfH[3+rcPk܏9Paũ'`6d}~3iv. pPyrP#'•f:Pt/? ]et= owC󢝞 Ds% EL R+ XxΊ q{Zvp\ZL kp8c&>_-[Gy-SpA ˛ilܛslop~gO O^V~xr緢n鴁DMfeYSxg|=J{ d~DQ,,YaԘn<:vR Dr03Rb@oGni+fK⯹▱U81b`bҹVT^XT[tk#7'^IX^ fۥvY%$*8ʝ-P1qV{1g^+j$~* 1eQ3Gy| _]_l*6d~Z}vszK2 u,[|}}]eԗsG׿魈Hؗpo7WunL*hduY{ ԧK}VyĂ0|~5fJTܒTVvB|o5=iT*|otTLHuS0ϛ9Ι &O~r͍ovl?>UTڛ~ya?U&t{q;}{l"Ry#Qz3ҜvрK9lZfz^D wH*{ͥL>[ qǍf"P9l߼&UQ#)6gF\T6D|٪Ut"!j|OjBO1F: lp6HF_jRz\Tƒ>+T糂쾃`jEEg;]F;xdm%=IZ2i^.`L!{'D5&4QscuPRRA8KFX/ʰ7(V" {-X2|*, J$K=(T9 G">4e){'тFQ4`pHKR $Ck\>~'!8s ]t66.Ez|Q>3g1}j1J0\h@* צ?@ ΔF2Fq']1,4hM^qJD"U<2f`Z@QAsgт P8z萩:ETUp֯rw$0/)Jj,YNTrlC\qŽ/ v03xn2M5Zr[;v'vYړUQk0BiǜIg/Apj$7*Aզ|6ᬗ3:]A{$c/q!Ÿ:g+t0vhvW87W՝]ks_nt'`pR/%Z5&6%RBKUS/%MS/ipYnY=h?8JbCsKFD¿Z]nC2pR ZG+G!ƊPt7GashU[jKp3"اE:)K)Ls$8zeprBV}]>n -gƜX7hb]UYcs殚躳rѲîhI ˽y"\vrZwvP ڲȮ8Us;\|R'([(eAM?~?>Ɇqm?L99Vb\ުwoooJ8#wJ-: rFmL/?)-g&&9&8L0A7/Rڢ{cmS,9S" =|WXRIF73&c&+;e cޞh/9v3|;˝R 3|IV*&yOYޤR<xDdo9"rB[k\7_¨ĺ9=e1 r]{P9ڲî8äQ]̰h Jpic% {7eW]$^+$( MaW)QugW hݻ]I bWi,ecU#]ZeUBϽJvI( hJpecQZ]9@0k+dWk%U1*)*egW %-nؕp!D(wv$R$={V=KS vTˮz,fA w4X7]% ՝]%+St49;C%INlh-4FM*`Fæ=PNh%;N(o+dӔjy3 2Z<<=tI9& $jWɦ -uUJ&]LrJifH4]%1UUB)tˮ^!RQĮ0%>t4RkdWct']uiIQ֡.f8Jd 7 I.L/gEZk*#$ZQ*s mn<2SƳ iΆn7 !4wCg|HyBFC w?8o(t"/=8҅0˽p"ZTY+ T;bq13Q3 YuN¢EwL1猧> hU-Ź>z8SVǛۛ%UXxہs֌ \y$wY/B{ Eaٻr+F?el;d0A^ Sx5GԒl70dKn֡.vǐ(M."Z>kíNryD.oxF=6ƺ,.?EmL.߇x/Y^ ]=wWvߵS,r̲\|0KtAn Ѷ={оXCW@}a}}HA>cK19*. 
T<ueMj- g&gç_@Qlǖݚl-].CKZM'Oi9_ƦXp"¯n4/k)*Юxv)%L)etF;y׊HV z1+~?$އ;yN<7o^k܎VWsgwBƣ%kN:pwӋN>r}d ]csR?Eƨ\x!!z AVܩ!&26p@AK"&Ȓy'˔gK ld:0}frjΠBc%eRԇgR$Gf4=΍x춱 ۑDqT롕?ӻYL{}m*OX)2.JZ,U,yd*}08Ugpp2Z;ot|Tֻ 0؜<CV-Eů6γf }# (ƅFiqhJRV0HTM%#w;.ygyGupS{Ohs2c\!`"G 0cURFtLXpʽvwfsvsEѿRĨ j:ho7ܔq/^3q|ϧK<‹v.;:;CYj8W@R. `c&rhF"6v#G%G< H uN=3G2Gy }"Ɛ#02 b&e!܁+l+#Zy3#dA%+j&GchMi㘠vOMf?5o*{ι6SdY),j3%,rYMEwa臗C]kV3g7?{BDaDmc6[ FAF:;3LB`G}F}0(L1}dOX+.>LRv~A`,ij{jɰBfKU(eR6~ӷ7C@F65+Ar4vۗ~`} u>ii)WF"QJ tQDs:gP2hcpcZ7I+9 Z}B 6YoV޿ ɼۖ#{gzr&T}ILfƒa4Uq'lMR526iՠ,2ƻ~~ӏ\~j&ud h,(Aģ&'>gst;r>Aݬ W D4ew'w: 4_䣣gB٥,BJ?oF}*FZL"6&*бՅBF_j e ]Y! Õ9$㼰K3ԕ|JZenE .HTW` с xfhm# j\W3d LM|/ z ~\揞wk]Z:,چ+Qdbl2*Ѵvjڤ§źf ^u\rxITmKfVz{ԝɎJ_lV=ޛmT|_}Wkݕϥ{ԼTrx(Q7׷xwkA=Wσ qV'v EEw-r()'~3thct/yno~&\ono ۓ\vT&F*kt `DҺS^]v!!!&r &Tk RsAK QļBE"v.{m}d_'^6>Bb`mpޠ7U^/B&-)M:fUDZ0eŤu HfgN#hمWLI"MÅJHlP9'j׋}yd''+KNyޏZy5o6䧺{WHs] {A2|VJa8jiܲ V`M\[c̈\x 7(f'GYz<-<=IX͜XTj#P,tXxT,\9͙>f!=7'ϓ0Ovg91b>rr32r@z%@4^:0Nu.c,0UWhE 3/)!h2v̺\7HsHckkq8K.楠vq(jʨ:|8=!H!pXJ:hf$MnQ%B#Rj9{eA]@=T AD251y6h"QH$ di[[~ bq ":DqÈIG& u]kh'wwA5sD+UÑD=u 9 i#7NJ pYH ``eIOy\ PtZ˜= %zH8Y(.i(9Ee\;\6fANz |6K $@{$*Fkwx \<6:CYaxx(_JMשϤ~$ה~1|,i_7H !֍4Ȝa|V•N;h.9 ƇC-O)8i|ܘg'q}P_Py.8(.@pdU+ C@r]vI; yrQDօ LYŘ|&: k[Fg*{ S9Fx]@hp{ZY0EwFz7xnh%¤5gGnOTfȃ&*St)p b)F#V!1*2x 0C BaC gie1r!V(B 3BAr66U3 "3%,uC8Y4%\JΔ049Dnh!%S g/ªG9!zl\ㄘ|T!k*;feliqC{B0$'A\%URbR{eL0Xbʳxp<ly6l`.rD˸ >6m"r&y O>˸}_d.g=dőw4}Le &"uR'rs(|7׍nobÜݗ  2Cg.od%Tz_1'(@2N`6#\4'H鳓$gA`Z0cyyYMF(n>{ׇzFr;\ϛѷۛ07'7pCor>'7LDH4)t8~.zj=} Vt8t8~*)\<FE??_U''6UJN_Ru6%떍ћ0SYZsۛ=ݜUkGwi/^۟:N}Bgj- ->6hu0V40uWB>eSUY?ʿGkm#9e`,nGC8/ !ZjI*6~3$ER|IMmΰgD$ ߲A߾{J<. 
ݨ./jHYEt#{I&9N WߎI:?@=jIѫmO|f'؛&H #iw7bਿEQew0^Ӣ)ۀ^A#|ORqwyDEgw?@n,n{ W޿Q.ëz-Zn0>a Z yM YSZU VTK+#+na~u">oI##E|y𩄑X^f!{kI wB6ȕB;*r2Ȥ\2 F{H\T5u18qHAxgKPuH;m7k '|7ԉ|JN(hř .F?]5?ڬ\X.Y5&7q\HNsT@ue4g(6Dӎy5x N :B|2t9/!|'W10i-yy9L6T8@`Eyh[ǚqϡc&op<2(% cjy.G\tT@a#ٴ3>&^\?BS"Z&AE 4\Ud9 Mm}rz Ky3 \oN fPe6u%^iM#r!yŜ2Yw qD *ٖNe9:׊sÑ woCGеo8B.Tr'd 'j'Su~aL|Ã4v,Ղd4K8WKG%Z\(lv9n,׌59G̘Fo+uo^>8gMťܩRs~-ws;e(BV~΂:?pdrP,I= eݰif1Bիe`7bގ'zpg3\18>2 lR"+MzdO)H^?䦯(RxƕzOafj7'~nT"t.ޝ}.ͻg?}xO !pw&;`ϧ?lܵ崭q܈ -z4.`ƵZ[+09jFc7U|\u;uBOU,+Eh:"X<0B,?kFIEu2" f:=r+uY4TLyPЍ g4y̖TK$sEh@1}z-s> ?'P{A-fzWD׭EKIHIyS@V0sA9.g 꼩* 6>P\6)ϼa:'ftr.YUHV`G{J*WcQ0p/JX^_d`A VHK%FaC'^{|K7-@[+bcL@3xAE&^Lx&W-j߳3Vſlq#b"gW".n0r3x>zMWնKO { gmr]WEļ~:uV]:H J-lJ Jo¿;vMFͳU>O*&W$Rzo7Ƥ62[^ũ (9^.%_[a:i4黕wVB>>Ks)qbl=aƎ;PyD̎R*|%(nj!&YU}RpK(1ILUa w]}B 5_}CGQ-8Wd\褅&kn%ZG%PXbPGDvqc!TG`Ծ^8_͟..!uݍ5ӷ п{dA}<*o$r;of\\}z7Ւ`L-sonn7ݖN 7 Ė*}E AU됪xeֿ=&R>w@M.Ȥ) D֕9!${@M:[zbU|XUGXCD"zr؀&ã9DQ\ܝф"9KrVlrFjL0OE* ňBg}N1 d`Y00t=734xϿXNUWP=uao>s >fLcA>^2+).L-STKkt1JS/ Q"ej{YnFw>Ԣ՛0{k?jP4'.|U5uR44o]$2!JQV ( Z2Gf"T|ߌO͍p51߽Tl[nz?^͵5"n׬!_I>I wEmԽEʞes鏓1;WìJ<7D-NJf /#γ$U ^L;9fSަ̴;}{+u$:Us.W YS')[˄$OAI=h"V\J8.p $s3mmT%PHZcLH{V:/j3.ʁnX^m?[X^E;'4oɷf]s!x0e>)+Ѡ^ƑuU棲Fz!RҖ:oRa%P+L|k04e3.e^B%!:J!*(QD="h%TIn7?!u y9|_,G6` zTSPoiǥXj#f9RNd< D‹&o~oؼEqaYRVP9#ALc3oo?n<)XֽX \ɣK~__+ %ɗ*YzK{Xj9ex 4x)!XCSN-(N>:[1t.g$>Y:w7 6A#NڤI~Qw Zhb^^wqWyۀcN&[ ]ǒx~,x^zޏ%9-Zc)蠬rf\\3U .F"E L2A[u~t  ouyo@Y"am)ݰ8s!_-Y Kv勼0__}]B9 yDc4Q&N#TvQGdk!bر+?؍F,_TT}Ap]kSʈy:5%.Zy cי\a-ɛ87~a1 Q'o< -OCuΏqʧ5V-׉\TJp+GE@戄.+Kϯq&[s==( [J=hS=$'C 4m)xLJAӲ[L0!xIĄ rT*Eȷ Dyb2{kWWǛRz-cw-~Yw{Nz?&+RQ}㫟1 'L1P Q3Fy>;wW5l(k&gQ-Y}ٌ'R-a&oCʽ_=$DXGALlG8*ȩق m$a<"ttK<^h7s&x)T[l`9j@` DB9Xyp,d?_o,H ZbO9>{G1‘ۜOr.ԔIC$IiP)PGbTkXXWEV(!8da!%aַTgn^;>h* RE*CS)[MT1dT@SN#ʝcq] CVob}'~ty{k+磥19Q8AMmttuz񞞼:9DdM/G@$G re|20 {c?[hӋk6fR{x^ɋx?o=mYZ>9i~qrUlUlڲ>j{OlJBA4s8Yô^a0 ģ:.|f/l70hHۉ Sb>qy㶁-.6WPHZ f-UzT!?&V-8\Ych/-2Tt jݳ1u. B-AZ}ptWF۬t6/@kR. 
[Binary data: gzip-compressed payload of var/home/core/zuul-output/logs/kubelet.log.gz from a tar archive; not recoverable as text.]
j=&ס$;7x]b0x@(ޟIqrB]r(V\>Mߵ6y;t 2s8c☬;2 wꤕP<ȧ˕d\X ^kRnh2h{"SO[RCwbLpG:gf Fp/>}Cua)9DsgT,ʐgB̻UBZ|듷z|OЋpc(D`,POY1hyQJGu q׏Avo@xwVג]w[YZrBr%M奥2כ|'cACY2RpJp6mUq ~0v5pƞuO(A3 *J8pA8hYqjT Ŭ ::Pg$:J5vG*E5Yl6;]7C=ˋӺFUwPwumH5sn\5Oaӭ f2>lv:8a-Y޶oW}a0..k4v:v 0v\ǙbuqlmǵŤjM#] DRTIp`shEWE%>&kh^c{lo}6}n.aqM> qqʭ}ݣQ"_}ꗿ8Ĝڟc;_@Jm "=DOś<bB-$Ӻ^A\URUJ١E:jcN}W9LTȘ["97%B!! 0$ՙu3qr$Sˬg \Odϳ%m{[7ziNs>e^n,(تSl%UQX0#-dɧQw<oЉcv {Aj4K~_= ^JawT=v77Ooou|8Y, 1/q*9J4N)CKToFGDU{MDl;}c{Jɖ=ȿ9L苵YtG%Kv9 Nf5Z"yКpL!$?-!o7/Cdu{^fOr2kֈa$YN(%e,[޲Ͷ*y]ƀ([ֵulEwuu->Si4[g۠[j`}na+VLJ<-nЩ<~\ՠ9ԁAfA+բѥJ@͑?mi;Ю=*dR%i%W4_lBLՑNʀ8a|%otzAPwhex/wwˍ.t9PX"ryRU$c~#rM$ȃgLoObd}*]"hnn Wo-E (CeSBogvS*~8O! 2PXr_,Tgܲu3|Eܶƍ>[3T\z+;}:ȚO6k;mV:+d\U3N|b}fk+c4ksBp x2p%Za+5Xv ne-vV⑭]-JpG U3؜\5sY \5k?vjVz•f1+0v;0GmW/f\2'zv>;,D? +#zڼ ݝ)ۿ2ۛ&Wg&̖sMڗ,dmsAYZZ1?ݨ]U0CL@1;Z))  .(Uٛ6Y&}]nB}P6dޝ%2Շ2\}mNeِdr%/qNZƦGl~}ͯf:47hm+MY.ܹ5ό.T] $uPk P6^=e*)ա(×MkMt򙛹d򙛵d=QYvmL{Er`-\՟s [\ $_4 m WoܘՅQ)uw]QGZUVp+;$VĤXC EmDt u5`K bʁ})9S#QAU!go+1"TFջ;h7sA>1hLl|: j,t`EMldUq(YM8ǀF9"5P AՔ*&e‰n'ļ6R8R: }lz"yh"NX\ *%(%/EMBKng-GWObÇ(OHMmy6%s?|F?-\G㵭Հy {q3h.Ǥי}/Vjq7n\̧ӲuA$MPIQ#`ZP5s9(RZ/m7S:Z.d*\B5)"YT*.'p!TUC7s么! _P&>gq_p]澇kv<{+ċ$WHOTt/jLRk&KR&"1JjFTGSSR8JL(DhrF`802XsVMl V*cLQukĶٷTZSd[Q\JT jm+iMF+r#qF4TrҪzfJyG=j+y4|!L>}x7?8@&kIPC.F#j6"h0): 1`>3_\~#ly~4Iɵ1Fb@ٳ4`)"(cUy]i7Ř AH h 89N>FDhB޷@%y*W(nؾ-ėzx M\.ZۋciYbN[@jw[BB`t"?0=9ݭ4plXk m;􏿩7M;G-[=[feYfe&y1L,6u! 
kmH֯2M;ZUHP])\=*$U[̆ΊVЪ4dO.rwt9[JkϾ+60>\~^ȦA(S=.>(>Q7$Пe~ښ/Mj S4d5p KJ[7Yvk`OwLz>]ݬJ_{=֡uoۖl%p_]On1f W0Ir +\5T je_F> EⵯR7eP\* ֳT ^&v§L+)Z5z&|wg]ӦY=RF> }K`}ymP_&fzh4({oC@5-sl i}>mi RfLKm.~oZo }ߓ6 0 իz#ngز&eV{Cż3h޺ǤkL+6QU0cׇ@=`GN%A" w;f猃lAgrRGV0"s2%vXXIFPF7 a5:z.h~vgRGgq )~\#m#y;IAa)M Q 1#2ɢ8ŋ|nfޤ7vM&zַ[kþc(xc¯)TLn oߏ%VT'rm$ZR[ҁc~ W xe>:x(ew"ǞdTÕs[wʅV4)wC/Ww9p~Unw]Zʄ`A}0yTQ͍QmLF.xJٹG A['zH=02a un4|d_c YHV^)y}9lV#=¼ٟ݀z/s~۫vpJDK;Tio]jF&Sa9&8AJh(CPQN)m^$\.%6!`BLJ;f2Rʃp-a{cvFΖ6|5|]lX%J`<|l9٫/o>Ī)4wM*trZVz43t]ns)2`V "]ōj<(Fb ft[1 Eixgn\ HiL1`F;0X wZIa) Ayӭ^X20E(BU%y ºUh[F"=ZɽspX @*`16ɑۄp?nMfk*'U`)kƠ!-0=ņ`iJY( u+H6EJS%>n%bʝG5bX0@7࠘K/  a+?,i@P,%˪pfg\ DKqh&+CSIa`%ɾ'cFIcg0]s54WJKuJk+ctrWy(|Y!d0; I"r2F'RP;1iۓ ,'aZS  Ί.%hr0RBI.e5:z$„ y,.⦌&|v~d"/=TW6d1K0/ʕz|UN1d(ݛsLo^yv I >_E ꧭw54 Mb襵'kؘa9p=ddJf?>,K@RX>C )?i,|[iSEN3Ww~6"2W `~w%D"Q<+5.1GIdD}v#%tJ$S#-AD=V& 88KZh'}jCG[c&H K84YyDA##ؖFQQ0]:*V3)Vb;sr+xC'9 tGJQ9Ժ,,yiy{r̙"lO#hL=KDR c^[GQhU$BaͨR0$DE*jqE !I% #5ԉOx"ıH1v&x;V:#gטWU>B`R4p F7\jOA*l# ^DE !gN}4WAIInd>;QX F  X 4 hw]U)W `JeVѸ䓲*"S>+dG(X;}J*w#2Z?₦e̢8XR^׌M *s.Qz$cPA基o5w݁r*di2+C Jf~EgeWAYgo/5yg浙E|SMnE0AlZm>-,\sC64u\}U<˪+ gyQl\^{nfbZ-g0@*m I",{Sd-Ƌ ѻAU֚_s'5{NzYhW9lց4-jĸKJѝ%/C%(4^#A:(EHi )񔤯$*m"pA {c͊b% b%LE9Řa)j.*>Eܧ]M.6f Ъ3,yL?u[zoʂj/)d(N=cXh'&$EV<1f`F@QAsgтaQ1,%PD*CQ]YYؖ`~G~MQ9튣ey R*\w҉)X" dKP`] "CLQх$g!i-9X6Kingmٮ1Pb< #d%rx*T=1^cb/01ל#>39mQ9}T3|J̉Td@|x@{mXΧ^4jf# `E;"ÄS"RsҟDł#l)a-(L@@\{SN=GHj#B|iw^9#Q`AFǻ3pvZc KATwfqI+ av.:#2#vby"LhAde2$Y2뫨Fk0Zʱc#_{L@JM1􋥅0oRDØn 8֏RKBB2[ ֘n&aDδA|Zs ӡ{$pCۛl!Aj(៬8jY/a??lp|ޏ@JB=3G{#3!:O}dή!j|!/cr G1ԴCM+K#  pe۫+w=d?ˋCƴo]N&@#9;}ec#.iwl85  v!3rHF{N4{p5yx9!\wڀo5:):HO}R)~x(hh<GCMk17 jHGG΄o1hz CDF~j:Ѝmm u̞e'^`ߠgN-&:w\ϿM}ψ1z%q@;C"eIN<` D~֋Q.ޛ<ܔfpDCoe'Q'O9_oL71$U}YoВ/}Pz_g/pyH&HW ѕѢ+J(]uIStY'NJ(}ZW!E`JpcԢ+`KוPnݱX4&'t3jsjmyt52+7AWjߪGc "] jt%!iѕ&[o=T]-HW uS[Ù?>id[#Q\ލtcƮ߬.ήlrܼƄc菜ݻ.5UiѪ4:FB;׾Z(^-^7purYW2۳_7u^#gG|g9VYMJ{HaKO\k'7~9?rB ^Q.rs55Ey$ۃ,oV?w7o[LvB>$ݫцI@{KomIC3gkuRS3Z.W 6Ea;HjgX7~&R9JOZu9£=m|i\ nl\gRR56 +'E!2GjBd{I)joP] nP3^"ɔ+,QWyEb`] 
.Y-Z_|PFZ<[R+U]1g8\Wp{uJrtV& 0%5\Jh/]WLFWKU~lJvMAA,]WBj9JdE[=]Lcu ՕP:2rtwzd]M m'W][Q+}$\4jt%Ε+j°r=@:5ܹ~N}JE*YG室BX!(@Qw%HikgL+b޺jԚ-1Մ XV0 8Vpޓwm;%.IMR+uj:ɷ<=\WBIPu@]lk銁C}8JhCcBj1DLt%Q7ydڄP֡%*$0@t`] .] -+,-pճ*&鉮] m(](cZR*b4`"5\Jh92ƪcՇ#0&Ojpapn™u5f4PXt&*T][hdKI[=&d*'{u1BEN1 \(FK(J-k,:&]1T+ŤEWB[+ UWKԕ 銁z5\R3+sDij hEEbXKX\kJh}ѕPƪ%*GDE`JpiZJ(V]-PWA8yLӮFWBk^Uud; 8̽i44$Z3F] UWV=`&]1#=ܹ'\QBu%}>(?,GԕڠEWBKt] e%*rh]I$Au\]I{-RsVRuxLNvۙ`g'E-'E:%`KorC oIS!{2jt%iu%.T]-PWDhIStuZt%YjϿk4\oJhSc`}\EHW] nT3iuJ(U]-PW#M['\FWBK] e}WKU{P+Ϋѕz5t6kUڱӑ ONs? Nydp-w5w&*U][$݅"]10U+{4P2 J] .-ڹ OLjg)w孂 +)9~l&ݻ jr[S MZ2Ϥ֬օ5swLekj-k/4AQ!BdFRp1j |!P"/1DvXE`"5ܹwFW2QuEhwDWBmʄUW ԕ.HWL^-LQAubi6'=Q+־%*rl4] p3q$-Z[l%*%[B.I85\P+} וP:nUO$8e&yڧ3ODJ,>j1t%ΪѕzEWBBʭ[@rtХ hP#!Yoi„r[ϱ 4-JMEL3e)HjzH]䓚V-^`>?*/ ᾮ)0Դju*'5:>lo1N!o>VЫvD>0yRЦMC?CFoLsL\<=c˘|/RoyWg L8;=]藓/9;;S~8-ٜ#GmA>6g/ܾ,Iӻ?lRaH7д$g~$޼~k#VY7=g,`Lj0wdi]* C(v- Y4&sO m,CD( ˼Xu<&銁] nP3&ɔ+Dcf{BHbh] .ѕRѕPZ"8Y;5\KZt%TK uu{JIϜK9Q4XXpdД]Mv0wW4\w{n9'RRasajߪI1*gl{,] E-ZGJ( ޠytHƂ"] p jtŸNu%DUWKUDwlaAZt p M!{^nSBgRd~GT_ YcU2T+YaIΑɐYpQ(=D!F2׷T=r0uRë-Wd-4wo6el\̳u6w []5n~ܛYq,'MčZ4+3mc=rkU]r4KQ4] nP3,ʹLJUW ԕH8ѣ+EWBkJWy+rF '&܄2jl pnProVJDoJu7޼7on|u_ͽ>qM>o룤ew+ymw _Fɧg]pW јw-"6WgX6kwm#}r|tᓣdc׿lb7k㾼w{9ծr4mM}5o~#U7jZe, :G O{{2 aVo|?3y|2oA'︠ێ|?>dz݈ˡ量ߞuGwdƖmFm28.[mGg[mrޚvz<>fdG1k˓߇?vu3g}MboGC6_ W7lך- 1 F'7ı iS&}ַֻِ>"R$[յǾ r|;Xӧ6Bȝ&zrG`z}?}',_ӀF뻾waM $sLG2/cM rCqҺ`t1i0a6Bkɶ}Ǖѧ0{c͉ZZV҈MmٻwsHu_[Go008@HQ.ƌ)5MrsC[z3W[3ǘ#u>mط9pص9h%d֡n6h =.gm--&um;rUa _cl+*.!̞:Տ&4 h-iЮmt;8U& ' %jɭZ<ĪuUJX\4flq%NX)j^ /hVIGH$5$YI$ehm@Vӥ[oU4^"GBZF7i QH6 WZR0UK mr8Xyν=nxe}1t9iesdbT PC!lJ''f= ]Z`w]';-,fgm5EZSRМ'ՓFhׄ bL _&lʌؓU IE!9)j~@'ڪ+-h'MJTEJvP-hH &Ш3) @)w!kM[ƮXB߁uE8ij)ڥA$\sG9%ҋDyŴ aLw(LjMnƢf$ӽCbU;p`\t%i#*XBc@sjo6֞Efa+5i*5c %_T4/Q4t Z،P-EC9yy|d^ vKnj-z5FC-:mP F8XA Vp*-]Z4jԋ+Bb)A( 8QN{*h=uGe(6 qLUہ׈ zul) ݌KC$H0r(:fA٤g|Itb驋l$Tip/hm.?!t+<!GTQ~ԋB۵ylk,pRLpy|chڌ 4Ϟ<9~K9;ȃ/Fj;0% 7^|w\=!_/bn* |oݿty=܌{^Զ\l^B]M9z#=l?ʟ 1iYzk:r}x_EqoV×ƥS%ID/fK9,5:Fh,4e\s# srArWM(`'CtE޾ ; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N 
v@(zgf0 bGq6N 4B(6a'qy` N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zN J@88M\@VSwJ @ZF) N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zN DPsriRqqq'J @=$ N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@z@NחCķoGOԔz\^ԿλVu^n>bebN%K{ǸAƸ1qɾ >ckSQfN;`fCW ]Zy>KBy/áhrFtEM ]\; " NW2Dφ;N:6j~tu/FOKWC?JO=J1]KAou\l%A]_fuutW 1XپDEYtqş,/ qtzȷKW׭`{O 1y;gĐz l*ؔ^;W.޼]q9O۟w'蠏_* ٭gm4~_[!)%OM=So?d*Ak"q=0-Z1X9ԍ~6;yKhm8uyK(gyq3#s%Jt,qNtc%TtEplкA(=xtj^tr.tEh7@iC+maFt#tWSj.tEh}PCuMX-Hee}k`Zr nA^<;\ z/Skh}Ec!Sm4mڔ\>^U0gPPмl[][_eӃMvwA@^Zov|GUW?%* /歼ryܱd^&mz%$e@qozSW)w>#wOow&HqºCjfLs:w!9E]-5 .ۖ@fņId-@'L7AHrRdIӋUAHW nJHJJ5B]UFUv]',_ՋO)ig4[yt|33w-mH- $^sDl6LiHewMD(:Xf*×||~ը񝻹/HJ""sZ;T(*Di f!ɫMΊ De_wK7C tdyNCoZǽoMJsXp]A {3ڰϾ\_;Cޡ͝\<zkr*rm Wy`_m( eQt64?'sF}/j_Ϟ~0|1 ~sƷXH! b%-[?or勼s=i.>>za@Rn\y^|g=BLjb%tJ\P7+Ck-Vj&JLku]IM&2B!)E QocNl,V~V )ā;nOjhlFx/) \nn[򟛗ۏ\,Zۼ@J3bœ[qM~M@P*mBbQeK@ĬJ~Fn;z#v$PNǎg?8m[rB ^~+Mm*_,Ae[٫u:+!hX d(2*sN}]rF_ʐ[UY :BllM[hgISUBB,@Iw&Gݖ &ΛSW E0{ 6ΖW=/i_BZku!yEuѦxeDzC"FݭF1p]",~sȞ# ^ybY9ɾy#&;;&-䅷{O[%”f`7ٜYw(%Qsi=\2K)C0M*zmX%sy9՘# -b'vroUeDfXϕU`أ"r!gU.oQըdA(j1'+j p.TMuw V&ԅԄ\ʼn;ݖs`@) |>"N2|?_qzqdz;srClMRM9;P52kEԗdW*6WN1䐁rfZuZݳ?[ʼ,ZNGxFMmx冰mq=YV-m\m„bw s0! 'ddX hZfJuG&f*Mi[V}R"LIjm`t,NlN-cmAOy ;scJU;afnByyvO%4l }" KTl,Pk*FD"q!ZGk5{.QS-"fR. (7^%L-'C'maKl람72ni@\|˙=`keo St H&փCtDZQ񵣶EmL]͕AgѴ,ȿqcV,1 ׻#c-hc,%dUJ+XMZ%`!Ԃʽ-b.nOИ"@qb-a!y'Y;r0An/n?9[Qw2WKvSw=:Lj㱼ō*v.$ls~8'}WggW-{=g߃gwbP߶\X<Ʃ=%^ MxpS ԆSjmQέtToD\d(d?],nϖ&рo=XVX3k16hHCΙM( DQGbc2Wn:"fȑ誷<* "ޡryOU,Co-j̄B"eG># RBPfcH*Lȋ-VQղ{wUNQ-7L2M!y19#o_j)+}u5(҅ȴy3lߐP>tC]1%6XRQJ oGKVA_j%S]һ Ø״o&}p^C37-6#'ͅk/\n@[v0FF>BPh]\+Mqn]Mv݉Dq"Lpb:6.)j)f4EQpceM!`'ɾcr~w|I/U2G`aMMpgEg UT>j 1[-r! dQ5! 8yBꬨݖ_Q˧2wCnrrTǬy66y仪zs*URfVqt[_s̪%<-SR+s3jEE hNr(ҍtnq,X',<)>:͹1-MWt@//b_ F2boMPb@1kd6vT^پaTЌ=ZfmtM0{b۩kTJ>C최WabIDZ;vP{b$^dtM B>R\rʣ(iE輱ޛRaVrs*" 3+:krQCrB,֤J"-g?.Vtf~.ƂHDqB-_$`%z+)۽: s*R뢄J["DI=V(qh Vd6)`qŵϐX!Z}{-g?"ޞy,XG.MJEa wI*<("E%X@%*2P ؋PpP;􄋧űa783qxx.0]ApSE? 
R[xu|&T;rUkbP7Y65 )'*1MEE5%M6`"0A$dN'` *62fFy卅3C`Vd*#pո3YL.պٷ2F;!7w bJzW붜.s GDJ.ɧ>ym!#Wؾ}v}_x MfXS̵r jM%l2ʺ28Pb<}j~s' =jȓl-_CN)1kftLDgJw㸵&U"$ ̍!p+ǭ8 XdsXUIR/%uUTFI#y<ǒY2#-T:cIoA8W,`-&ԴQ_ 5`]U-6m* =j']> pl{KUxCw?7ʨ{qj5 -#LFw D02cԊ4;cVY?4soˎFp,oK~2~ ͣ_'Ooek*:qwT屼 < gbhB*rK%(-BŒ 9`X[>Ը4.>u~v|b`DW>.~ŖncXoQqhp.`!|?<iw1LpYJ\3$}/71o&6| ;Iv^G2Z߻'i9 i)dH8)Kr6>gsʻL-*덆S#3z; e5o*T'/Rpai˟Yt Bf0cB]} ~ `8~.`"|!e3 U-,gvb5ƅzgn >βSAQvIvPr3nP{&}D#+ s2%vXXIFPya4c0"sZT;?Ҿq#lTY\πxae؀i q$! NiJia=fDf%JH/J.qL TlM$oza~طJ~R~}$k%|f73ϋ2lƉTSKdZRSҁq =|ZG^FkWve?OZLX):kSWV{)]/W3C.y>>#dYb?_WW{L ka^!4; :0>yp\}0TP.G= 92( F08'8݋?rePONK.*+\sߛӀiiKqM~䪼rY9 Q$KWERhۥ OI~P  M%I$חtb}AQ =~fEѥ>]jtY%7(Ks0#``⧣C KdP1Zj ۉ@/_yۋŷ_`.^~oa$8: Pz0_˟.ZS\UPެhE/qA.vkĘy²s\.Ȕ~L~~_u.E,xF`uڍO;O2Cq6'*;bCk|\%Bݵ)D"P=1F`VIQz$sGJ"鼕FI,` &$2. bψ4뤧pKq.^$qid"  (J2dx&.HtN%:rL&rZ=>ܵk CNwj,5?ThkxpZZ QR]'Q!B^/mM/=%0qF tXS"Dr3^)jR\i/,gsT P|DPC:ず Jc=Lz̥r1ê{ZbE:xl-u-V`N xDHz {FRYѕ)e@3 QI2+NqhB8Qt;yf50k %j/ʌ Êl3`NcQ9d*yxճx4LM{_uE<ʒ*]Y:we190IxMV03RJmʀSm|(M&0&EEkXa7"HmVAIt)cRKfH Ψ Y@S@UVK=mgM[䫉=\=_Wa|)à Io~SfL$Y}9gAxԿ닾?aGTEӌƬ9UD+kQ.Y?tt[v= %>[n vվZdJZ3M]zBAt?*pn ]Zt*$-]=G"L,DW2p%m ]hY{*\PnѕhɐN^P1UQJTB9cuS,F/M Kg`2/Ssr.`2X: {!R<¬Ny:RZTqX$.w;TkIFKiFIE>Ւ)E 0dj;Ncp{g)np0RN-:28)šƨ.MQ#Z"G]?GqPLUyS rBNWe⚴t$tšk``J%e{,Jp)tQdͱFp9j ]EtQ͓'+)n]IQ"\֘tQVztBt[\n'yr2f24qK>^Yb8>;N@ũN&Y9 0 ~:Cb ]t8~(.$_{~fEÎxֱ WKJZiZqJ©Ȝq@N O:%w> 2e޽ay#)yt1:?N$ϷkӓA3Bk2?rh4Y(uP\˜u]?jH7d?'[| áL4,eTXS"0XZ6aFY kwʒ-70J*|b|-._|iGtp~vFX/ʰ7h +|Lͽ{,Ab̰Z~`sˆ^Fc@( *#X* ϣ\$8SP ^ŝt!a˼'&gzؑ_u6"`1 aٗ9`k,HrbbQKMm1TuwUERժ,w4[o\ҙdUpL@k"pdT90Wقs܂w`l5؛y5[,9`ۯZ7Y}sEZz&_(֣_ܞIc[AqNAoCWn^)Y1Oh6LJ.ӨɋMc U5H`z4}OX(!>NZ.o\0D=RBH[hsݚ/u"uM״}&Jw)6^/UqϪ*Ⴤ>Y*&e_`bX6 6iE*)Րz%cF؝.a?۾1S1=1C%3T#]kTۓFwvZ>~۫H۫E7{z)OuzZP[%0Ke61pOZr+xk#2E D F<蘸#ZVxc{i~= k?ۤO@MfkQ/Wi'FDj)ŧdV k]hIGN=i*ŋ[/H>f*]~noi:FZd4V=칝g}?sdOJ6g4s|ßɼ.[aL-3lb:2nPWGZz.3@b υY bx#FJI7iP"%YK#z=JĉZL@;b"/ Ÿl_(!w}F8;{JXPxf̽R`.xAAgmCeNxɍu4bTqGQI C9iН0ޓBg-V׻V&`3CI"j1J*c€ g[Ԝ-5R eU αNl: o׍}[zQ=qXǯٜǻ\-y4e8WBGBp/r?}e. 
0$phA`\_ɪ8.8L'HEG9aHuGF{0L Y1 тY`QYX)lAk\i?(]fB<(b<; .ਚ8y ʾP4Ё\hcy†s_?|Ҙf?}S/qQh JQEL,CpN许Ee5c`dxB4\WSLgR}rہ߼;jf'3Quij$χ(%q58NLb8NJMpǬs8yzp^0`V'v'r=z炊ۤu$^gYudf5,\a,B Y[ͧqfdCd?}.𗣁S֊ˁ "u:3y&0 Bkr جQ;0]"]Y)]jRÞ/ifl!gY+[JUź>D vJ MQܐ/BКƊV wFRĕZúŽLN[ΘE*:\IЄӲmB✐*/sSBj7s֧ivKoF?II_ydp&9$Xpqv59G $B]Nb0*Hۛ,樦}[][~JC=>P@^t ο\d|,E&\ҼVN 7F41xy2CC9e KA+Ǣ3s^TS~riSV4TȜUP a],+Km0ˌRYJ^JDB&qx < \to7G 8dA[,d$8@ׂ>3zE\iJo"-6/Rz-5_}R( fRc}%&)XlV pB>hsҥ471ڷ׼ ̇FG?ҳs_u.JJ R-HzT$l kqc1rQdN3*=/yG O E8E缇h}\b*;t f,RX(&ebAl@^\eRe]D[/-,ǔa =NόD#D2&qܹYy3C_wIP"MIB X!SF!3Ϥ/*)2 T4bL,ߨG2XJ,;g^I\*D&K<#e`| \ JE3lҗh(}]߅P7UW5aH &p]L1"3\ٮկncd(L̄YhW< xJe2L,D4lG6p}Ng. xp gS`["^b6IjvGͪn1KT?Q$QjhܔW㕷M QةXo8-2APjeM X*_D)$2&AI«FZF2g vj;(n='@CǂC]_V/B1*n 3^ ayeqX'6\օy7%LrMR5oExYv"Ş)|#\=Q:GetN[hбJݧrg Kl|ꋲC TZߗ>|%) XV_kn|56 mIF𐽎 0+/u% ^Ije> " Ͼ?*ITW- B-:\ Ȕ́Ky6hKRBaƓO)>)" "Gp;+Uz8'fWs36F'6F%&1?04n12gyf7c d_=RnEr>K!7B~H֏%t=U7N-c4#?}~NCrղ&HmB8oc׏5юm"AL;Kvk< ش75%O־Ϡk۠d:nH+\[sx/ڱ ΃' } .麘"7{p;HNhܱ-jn\^.~21|tvCg_nVgtqd9{|퀛>km=}CO~f#>x~|Γ;K.=2jx=r;w}nߜzjA}yɈ#MWޡwcEC4𕢁٣DpZ,&9 v AB,XAotrT&5Vmt5"!jz堡9h&rk\рD.00PrR$ BɁqu>AMz*cǀzo;>g/N8_ Pf]Mj03rf=٬O SaPLJ^PdrfdF _19 $H4J#= 53V縡vi8ԇɓv{J#X†f?|STWj1dd, ecmUD}YN?F  7u{9=+K,I1[ϣK,N$ G'jysBϕjX/O/ԃ/]:HVȝDE] 0Nc.3,sTg#PȞ m ā=ʨ:c沇C,hS8=M׮Oڶ׶!'v< eu`)a@%(}ټPZ`˜6L]3#CE$ &_ u#Y 69~l8881J)l)}Npx(Qce=U_Wu#`||k460VF=0Dl?vDDJEĺ;D9HxasQf+uXI&%*DDUtd0*-4I NmVˈ3>`DRIቧ ,i&4wU9#.NfKmlKvE0.;\ϠY,,pɘ,48NљPAT9N4D s}b[1Ex(v=@Xq Y9y?>fds=a7a8w} cXǽ0)7ZP=9IgAa5y@f<h 'JD9Q:D " /lC ˆh"#B(EM44Hs&z##^Q$yh6"sbK̃ESpr@T{*EuEw}w#xs|&ɇ-v ϰ ;lsK3 |(gSxrt6sIL1g=uZ ZgFzP# QxWvDİ/9)&+,\ܭb]a‰xW|?-Y[߶_q!wڅ?Naߠ F?\ B׻D9&ë1[za,ykgWN)01maΗ`'07E|%cߝV=5<lޅ0g4,P5ZU&s`MtXy}txE~qj&ow@ܢ2߱X<+X& =3|$WnG:cѸF=T΢6LVU 7턷nuɔMh͖ fMxυrl2x̖y&3nOQ7יߍ$-ola0u޻΄y&*kD&̌QF8XXNs=Հɀf -k*Fi%,+#r?^J;ҷW^݉ͶfSόSL۝ֵ 9#W^ RP)C6+6&N%c`Ty KäԥD# DwϹL*&&̓VOv9[ s4mWv2D3_{_n8٫/?xsj.xxrᱬ`љ7!9G%X:2DX DPv˨IoS9!>,e_Mڲ=so6\JLk3tֆ ' hJ hr Iqzt=V+=K4<VºUhz]Lg9z-c; Ԩy&)>)JYpO4m*K0Lĵ'Ƽ֧h)kEJDP1`n A1Q%_`Rn 66S1s?v8bj]4Z^?NfeQ8_z1&ȅ =L.fW= oT1"$+*/ 
V&ES#jM9:ӊsÑL҅M./Rߖ٫Q$9&h $00ԁ:z"7i'98;E.J*wy3tAXPO7ƋY}9TW!xS tY-< []ڿkhc`82Ӆ*Uq1wq{ez·WΖ];;-] V|uY<:w dSOlIȻ{ 'jc76a`Xap׉+pпMdgwy3]u6MnJ^'f2K`Mc0|Lh11h,MFoz"Fׯ5돧߾~{J9}ݫӷ?~ _`.Sh ?_DO V{][Nu͍آ%,h`{OTVe/׸9W|u ~ߛt4VxX:9EVMP7ql(LblޯQr"Vb@Mh(l 1uUmQvkt3_a-hC- *`0(ok4 D(XE:;So86n*otY4:ܡVu;㺣%cjU\ЃU*s(*Z^v*Z`hb+$WC+V.RiXW_"\eԂATeh.ջb5,f =)^-i&9^?0Qy" ዼ/q$0>7(eӊVJ,~9SL/I_]CЊ݋D?[ @G 1'rA)u5!#OZp)'Rm覂ҎS>cZE3B7˭"z Cj!̐ƞ;] ~Y T_G“xSxjΦ+m&%ܗYdؘKLr L65*q@-(9eP[VH/Qi%`C١R+[WHK+i$WH0WH: նpTZWhN,Dr9Bjl;\!˙@:bPF!yt;Y$C+ޓ䪃/$t*܄UyjbdKX&:XK93ӠJlKa+},d*ʩ_h}\F寞4.}Kk9J7߾9(nl3H`}<Іh,RpK(1 Xa씡(Cr_],k-i.UӤ~50C(wCi GcM:9hz:`TD٨h:&.)J]9P{On?VS"ypcңa+߮elzg`q! ~WҳS Š&,~7\v%5g z@ZZFW6ݻv*3TV.G6RV>]qIZ3V)P.=t`jS%کN6>Z"yA=9h~K1y'd܂b4eQˣ)Ii:jR[g&sE3VG=koG6H~'{Y ,!ZZ"ɶvu~Uσ!)jhQ1آ8=5=U.^zlԆvf7a>vsx_;?s~nID<ˑn:kx !!K4(n*!0'"32./w>1ctAMPbd\%ss>X)iWG5r^"qb:ka9(MkP)n<՞J1p.} |'=gëً6z_$vk~ӭ8b4ڻ!«Ex"w #sTkS10@9 s@E{‰TQ8#Lh$&91~xrwO8`ĜBF潧0mmGUgU3ݿGBP@' yliPa~.;Pn|{[^WW+ە7@3,vѨP7 Umg5:B tY#o{^דLr" (Խ92WnVhocwR*_&O%lg9_l{S {j#;& =.|=!|xd.+)!դ [lW+W'/B-%*+ro⿤tDT$zFܖGƻu򡼃(P)X* d5|vnqب2=iW⛁)ޱԌڎRl[&1q7mf̲FY))~6,);n>%'sоfړ3 /7u s|>j%液=ƀhV޳\xujO:_?Ouvi{NwKSa9TYL=:x=) 2^dVߗt"V$' ˩ ;Exh;量M>8Z=QQ&eJ,Lj7J5%Qrl3<^I^8:9ffJIhO*30THi_P7XӪ2K_L ~Pz},-.1t/ZT{KB Ś )/"@][be$tCY#+!ܞGt nX_S\ wB6ȕB;*0b)$c%dL%c`TeXKä @wϹIQQ[(zvΎ&)7=H)_1weH#w Dh.q~c#/h9Y, o tx- 3{G~xk\% LL9ԘsXu|m,o^L< /V1"ׂW.4{qF *ن)'얜]Ĕ3Wp;d7gpmZ(N fBU\V@&ٹ7> EB]^d,QRv;?]γt| ap놓.~WqWfMu N)e Yi.ο,׌MxV\(Uao Ԍó74Cu*09qt=HF?UWn4k}-ZIΕ\M2sNT2rFh\qOrfx;]^5' oZ}ȮUUJDeBK#ya$sB8 ꜞv1m qdQ}yW7߿o۟\Pf.^܁;P?O`e|K[NZkF/ᩛ _^w  =]R,azUOeymt,>U=u8-M71,bn(L/c7Qr&V,Ġ S=3~$vd&f>s=)YD3ZUQaQA8"dQyn;I2SX*0@ j$46xd"` uKg8KɤɯӋ_RzÉR}OOΑ'KMʝKc׈UH4d(}Нh&<`ic" * [.0&T29p@&[ ..6JJ$%huQ9m(Nsb'"NOPw 2ڳ8ۅVwi< y$!7.F1)@ȁI\bxzQ$RIA/mU9QPJ6/\J2D­Nq  q H˦Z#*@Mr!ۤ<\$x0s2F;ڧro:'!a^$J6J VHK`rWmJhhl$ vxK7VƠ9́Fz7/_v9RUU^e|. &RH^z[$&y TRiݳ&[:gմ=#@>2DEp6H)Q)P^5$h_8˳؏unǩvr&sy< W0|h~9n9r6,ލ\>wzha̰]5fW8c| wT<9\,'w 9"(='3֨!-Uev8O3z/QYt/arۃXiOu!hJ^"!wҘ5kuDkՓib̩H,{O0kOp:nEirս:q. 
$s,6 HJ Sk y/J.8ۍȡ}L`*̶ "I)ɏҾ҂\]n4"Zגq1IĔk}J?CT.gd a+ԒeY/'$T.9k{éU U%:X*xY ǔ*PW)]Obstf{+ 3~\A-MW&(e؟kVg=ޢrz|]]u3:,: HW *0-lG eVA %:on yrrw . 3xYZgjQAD8lo]7*[тqn\w?Zn pDɎNSs߹[#/r[.5{P3Hn! )kl]=cݚ:wJ{;?]y6.s#7d<0_{u{Yۮm8 y[o2qPa ~Ĺ6Eۡu)ܵy5:Y>EĹ"v,mliC>uu@|od|߃KK-ǷO5BHJ5NqzX$WeraA P{88Mr猃Ǔ*{>"/}BE~eǠD)&.9c<DPȍ"r&=Ҭc&4`*-˕jB H6LgI@x$͸<51gB Y1A&vc(UiQ*yz'Dn6lxj+Wޞ!1KѨTk++>UH0|JRL>Det1yOrA1qʐ@0H,:&%mo9ǿ`ܻM',d`p/fpsW;38W,_$+rbnm5NĖZMUdU1B 4fLCq09g22 evbᓲn%[on| 7yAO?_pb(jreЀAQQ%6!(r€NTiŰeGS6!'*F$cSK>p4VU/MMdACɹ$oc`SOԞ,[0YSBMH\5\֪RLZ1 WdDf{2$/:b5dd.2 "BMDJ M݁y8yp&(X8x A Kϛ8qˈdD>X)kGO4O]Rk4% _C07.j,lQVL19,t)'-Ѫ`t.&n"~PWGUҘ:%0qqm#^Jv *Zl-MLUz-ȉOűpP<ć 6`xw/* )qGяOV0w'tu{4ޕsrKT4h}"ϻ#,I}S*^HO5_pvP@ AqK"%}UN3Tp[ (\8 n6k=9];r@Zt@V\ғ2v.wdU)<'R#RvZ2M.4ZiƨXptl49e)q2r ds&=춖A6WE2Q L+⢫Urv9֬ ] q09}'Ȣ?N4Y4Y"*^FdtRBc %B4BLKTޖic &{7եח׶_|\*[F Wu8i-?\"u\uջef9}r~r~u6~-7W.[<OtrPQWR_z)o]=CQ[_i1j}~wQH¶AU;],.tѠFU $ao:@S;-<͘gre%>?Jw ?ϟ>L\qqrv)e??=.W_K:#bM}.7հ֞a][jm\(L]ώkv]ZI_"A[q jۅ[tͫ3nt5VjƿP, 8G;еX;LwQGUXUmqͬ'纓}RԫEvD%dK 1`MC&2 & jӊjMdVd"L3h ~ӡprlW֊gzJWZ)!TCb6+XWV~U W+Ћ+ Soݝ!q'ZݡAх]gJ)g@9H~M;,6cޒZ%[V f2oм%WZUi߳ka/v~r zbGKTȬ+W~CV5+JhW,wI9v\J'\!cO1Z X*S턫}ĕsj#f|YPvzpZs,[I#Fuczk3md{f,6*O44ԫq:NCz0>Kp&2 ЎroDfz=4>KH,@\$E=v\JWpEQb3bJ+VezW -wY4Z;XpK) nh{*ZUjb8E#%[ cF5+kqY3*d]#WZXpCu:aǎ+V)' q^ ope5 r@jqkWU1UWWNzH׃ECb4+e+"FUNC\I畁&CVxsjBv+!r a7il+&^ުdJQ=OƼ1z2ӘS'N ;67ú9z HZ;l#]3f53:3v3Ujkv2sh  XNz+G\y!\\,r2k!;H%lz3u\=^ -w:݌~1UBغVhkJ52gz &\=A[ FJ XrcD7jq%#'fprjWvV[%LK\yIc |@'L #$-xa76QgpF]azcbHh&oN6M\M޾u_/؅c+EyImPl 1'YWrVlD(CU4|jC!ÞK1Y.4cسZc7YdaFlʂm;˦,VpEj%ȱU*pFHy M?ʵ+Vq*p2  \hdj$4+l\3bZ~"aT^0yT; <ˤF)_3[^j[L$Zn @M[lq/_V:\|o44kNM{PˋG[gɽ++zUv .Yt__4˗Fwj<ws, Q$_Q-` yY F& ic_ɒUNϥßw -֐"펕.R9~8#k !_k EJ_גp_}po>D2esM܋hBJ>+UuoGe/~[X>9U23p@|suSY)Y@g1z:<ilMp iӖ}8Yz=ͻd?ͱ%CIo[ds$oUn:[w݇}^ Ǒ6gji{4nE+evB/g>~KɛtQ*kzf&V9&gM`=eNYf(yskuP3GS&U5f֘B*Ue28N9WS5USgU}*=)N-"*V_Džf֗Z[W"D+3ɍ]ɤZ0D3bsRkPTA*VZQ5m0k۫N=hSts6~-6@.VTSԻsftܝZ1U]dlrԔ1"'ZjXBɌexhk#Mj!SR+ Lg}Չ!hQJc ,h2ѡL2:玥(d CȥaUΦ{^bD1fZґG u{鑤>]ܶ穴 RѦ85cFgd3 
}vޜGVͪNTRJIIbU1=HGQvOp߬#1ɇ%j1)F$Zt%GYG7( sm 2&,H/*Mh-|.`5HQC_"$ڌϕ!jT&"vֺXl)6.C2+hlnƢ27% ̜#Out:ZnPB Bтw եfQ#6*1eƫP4\ H`9gbjb |4@sXTM+ Eey0P(g_T`$XLytp F_X$NCqaj YZW&+3\5˶ " ]Ó`cyc֣sVc3rBE(ǵz*%n (dJ)s'WV6n"R5Rs #  ڄW'`\h\`AV4:(576CidLBXnl&L6@@;-V(!Ȯh +6CA dܠѦ!P C='X !a@YP.34vҾSDUtFtѷl : 9 & Bl  ` 5R(pPgU ]U r QlC@?V!zoMtf%Ji7 ] Bi֙% (Ey@Q@PShHX|F)fVaj{VQR@X}Ji8zFȼBjM#HH/eJFieR":Z(k Pe1!ʰ&4fx_е?V8x4iZȠ ]' 8Cw@{=qYdFǘbcUC:iF%]ov=xbYί6v5l=s/x l/ L9hh"X DŽ1<*4Ey&)H!JdAh2BCTfnqIuflNX-ƹM;qt֙tibtd53+ioZP)J]q魪]> 2߬n$F ,/^q4U*qaV(m``3׸d esl«t]ݜWsLvVb S!\w&f=)[ZdCQN> ;,Vv ӚZSМ'CakBo1 PNf~;.2#dbaADy آsE5G?Tڍ mM4YURA  )aɔQHπOt0Cz;:t}V ƾ8B?߁uŠxMc'7 d#GPg= +UA Bah zr#>j4#PgpU[g=*c6*R U 6ǀ욁6<~M+qҡǚAĦUJmڸfX'2>2BCzzf7|*Mf>@M›΁ ֫l1xdi1`YVM@; Z&[Z =W&3 M F9`=KFD=ki0kqEp:q˱ %׆)XI2[1xxA4T.gTS.\;N8 "ˡf-七IБdpMԅ]0q$d Jkwmt i\]wTg0B>x cklq7{qzo sHG,IiIeޗksFnfl>9}ͭϷGb뒴u&^zŴ~Mw}Csu ˫«M >?Xk8vq׾roյ-c1PqWn}u!N%nyן"4,jm?uk>֛-v{9=9ц_+ѧοѰf||s>˳Nj{Զ c܍u]춊~mL꘡ TXSOI+1ɪ1\wa (=2Õp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J Wbz*N ϯn$>ո /\/pÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbzDp1Ɩ#2\h:ÕATpec#ho-p%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J Wb_pܘc2\c\Gc'hzZibÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J WbÕp%+1\J Wbp{ AެƎ1ꝋ0| _qfooX@OڰƠ#A.ha@랿? (oYv0K9ߝw }ah#zʨՋ+Zc  ]:]㠫ZRϜ(o50ς]=v5#Հ@sO +|TDWop2h.p;hó2 ]Hں3׷.gt9: q˱1{+! 
mntBSgO]7y9|-3|j⩊aasGoڞvؤU돻ܻXR+$^kn bB?<+eM򗋋W{׳vlz8]+?/ok4/M ?cXH[ﭵ*$-6?]\ 9\z;h?&gO?b,1[RcX>VQOZNd9LA9v%8ָI"~^c8ykAhM|v$-[:;"hj5t,t5O%?J'K+tiGCWoz-It5gOW% ]D⨍;f>$9hI=w(ztNPͩ۱flÿ6Z]\Kj4û^|H'xb]7_\u5K_*e*ugcoZ|͝6Nkˎ׵0iUkmH_e.x~?w9'{ ؋F?%)!mkX]%%zQ $8͞_WUpT|V{=39"_oz9k+߹+4l:Jۈ YH>Gˌ&7g^hĀ,">_PjJVԧ(ߠ||| z8/@MmBO˳⠪J؆}TxW34qp:"E%+Mref_:dE:b{4Fy55 7gjSs0'k..u:M}.7y6o`0lp/eS)7v {_W[re3m0Js0Ës|bNEDzAמp64Gt&99o{P|Qn 1 W.Yz'&KNiM- t˽<1vX_!3.xulm<-iAo';9&-%7O-^fG/- ךiPϫ@YWY(\ոYF^]Y`̝wR`mjѢʲ3l(SgoՌZ^hwڹG׮Zl׃zߚ5y8ƫxlR( 5?rXoL?#,ɕ&GF* 9~Ks k~O̾ kXĘٖpWcY^GIU'ka}ְ:6kX5, f0.U16&>c 6,` Sk.rZÝs_knf|OAH%Ka3.D F"EYia"w (}7.hQnORl-z_ )®{0^1AZҨUNy5 us9nNe opRott5v5z5wgCOO c1GÁNÁ|`s:xI#{"Oׯ7 Q8༴rxT'W iRi6'998zt(p8p̲׀**Cb{%bBOi֔K%=g<9y<9no#O0K{1(=VGw z eǝER2Mh"7*f ̙e >5 S:.:v([P,cPK3>U^'w'+z~Y+|9S$/;sawn; pb'kec9E!hTT*$+]WUd\n9=1&`btֳR49  ,tU:8P,4c>abidz{:*Lo~v.iA0ekI*{NX@]! 27qbHfns = 936`cXll;blf&aUƎAs "g7b .ǂڝqǡm;Fm{B5 NxrBd$%c2ϥ<%^"g6DXڭA=r" ͐ Γ3A8$(Mi!vFn<4bB`<D?DDJEv"q"zRBv樣J6&ĝ9AA$Uё uFlcKmVˈ3>`DRIቧ ,i&4w|#bgFď ^\fKKE1..pqsAYX1Yhqf3^8Cr'h| ;CPt0<|0Y(NYя_ܠ} YkpM+U7.Zix{~6׬¦>&A]\0W` OD1eb;:#g ϱwnMʎô|{~yGٸ9KsxgP8{\_2ϰԂƸρz0i 'JD9QN(ܲ N &RI12.^D9DsI34gm721hMH)0P-2W!FS&bA+ FH@8c wjOH1p.t]3rvGB9ߜȌC|#~xӭf0Y|<,O:Y `c@9I2欧N`†'H 6 bґt<:g#珆w'f,B%0sz]ܝ3_8OBAXWU 2х.umng'#} Gnݟ.U0MN0p?$4'n fʨ%YM(8 :+(qN0稓ܫ~na/3%*&뛗7_3b%QEQ~,Yp3,J1Y.zwIJp".RW(r)H evx{rS3nbEZ83I~/.}\~\uk ˆ}r)]YGEqN/7)[U3Bڇv՛{lLT&u(W5FpkŽ­aR; C譟: ne*mkmWbu5m/.l[9e CzcXϩNǰ+l٫a,yKg TV_m)ybZaΗb+XYm.7V߾.ƪY|3k#5uD2*C|wUmcs[w,PbT]?2 S6Kq?`4}hEm|dڼYJګ9 `LOxq6i -w{-M`/d#]"NC]-ͶTgukwz$e&jN>GV0lRN;|WshMc蜫?w\A48󉈨(~̓1h"XΘ VH9@C d:::]ơԑV/^$ϜE)'L4܄=@\L4KhXUKH.aX SՅ\@Ux6Df[w AumB6ȕBt名bIƬS2FpX2m|05u)QHBqs.Ia{SĮvFΞ6Mhq+ KqGHI ao#7쵗k [ڰ|ײ2¦3rC|ɉsJ:ues( 7n;혗Q'"&s~!>lp%^ПVζ= \JLk3ӂtֆ ' hJ hr Iq}u=TkƽJQ0 U=DG.31KVjTqiI/s.A( ȅE=WJs ^LM#r-yu XQ}QmI!Ψc2H9ag#rv甇3WUoϪ Y۲6iINgŏ! 
L,,(u N\f[X^xW$W`RgQu9_\1Hkt[\oX%XofHtqq>/TrJ.Z}gaQpr(?8c {5[R5}Y"偊yLϋydTy6jˋ(iCɅ(bT&Xȑ;6+U48ye#8<>[q\R~ - UUvrgjQEE$K󤔁c=Q,"}oښ jӁtZt>8{)//Ld\59Į.> UDTs}Ŭ`H̱0XP"E06Lm)Dŷk7y@cPYQ(ڂN2!zdz .aӥpW-:ZI笑JV6CY8^>OjE"ei}n輚4C My[Ss'>C?f >M#Jul.ߏ`o2rg٧ɈL߹8^O7tw? & '{wdqz@qѭYA3w^|䫗1tOf~oؗD5*2u20BU;3VU}IQ F]Ho=%𾎁frC\y!K,qՈX ܤ,|nxn i2񏋋QHB]66bx1O$O@Z\݁ )JK ǧzWVw+FJ3q:} C4zRlzTl:*zޮIgy!zBV[SvS)7hWF9pvZ537 CYm_C~2:qe$вtyG hlr_f K d*KBkJVINZmGU!2);NWt,TֻAlɞC*%wG<+q3MYW6hW=llS=11K`a99{)".2NoJƫXe0Agnױ*9 mryvC) j}ݝuϞ-a\C&"Qg2` %/[mD\Fy;B`=g;@?psol3OP=]lY *a gEmdw#Qܗpуɱ#iDq'h饜<>:GԴ e/Y R4aoDnSG !|-'Di]f/@]W"bPF` )\޻qy1qt'-@;'VqDC#̡{>8NߞaߘZUUBmDV7P,d6j4,.z+r["8m Nwޞ ͘4<(}%rVooDpS~C-:t_9 ȧ5"q߿b 11 3LxpFH΃Ѝ즲hѯEh'ʢڲWG0lL"^縕q;)AKI_aHU="6GWg$i蚿g|{w>"Gu(:CBDg9&Ds6ldIR( Rj2\, _K8$ؠρZ\^Ji #EB!"`(!1gh4Cƫ/a<,qHtmZg<\פĸ눊5h5js9KFm}/֨\(i/XC8υ1vJ9XNӮ̦/oo{mS2 ,e5"~:y޳u=B(I^P>M2k9YαDyꒋQDu4)tN-Q1t͑EDgMAF;su*sOQ3E5!]'[fj}v>ζ.sIңxK㮇q|)Em͗R6_K|)/e'^Epj>͗R6_K|)/el>Ll`R6_K|)/eljKH唕Gd0Nm.c1ݩռ; 4A)Zpz# Rmul i(,Y\5cp15agPm#GE/{Y|glf2̗,#(E[YJvb[o~XIHlIM쪇UbM%hl1m`drqDked+.yۭ +B/Ξ^gr|\?G5J "D \±HeC Y+ĚdG11ԔQ#sWx?XEXuT9IgI:gTҵq{RCZ-fDH1:S%۔U2FeRZfOD(H@,9Wc]fu-$e9DqF#,AjYJpB4O/H ҃ t"PKjsaA12 4u +.[xwVei̎y 7rh0+Wjb8sthp1 8}룡BSb$gY#Q&{ mWJ),%Ia$O }4sRԳ>d!AL׈rPg3:˃9B Y% 6D`%*Ymyǫ;-Ħ86wtiq%a2kLIQ.=[9<o9uU.orӨfըsGKf&T1~Aݸ:L[{p;W&vyoΆA0;_^~_??~ҿyo^W? 
S;ƛHP$Qxn ?/y蠠ybo{k5,iƵ׋ߏټn˲AYZh׫SA^~fq;M gٰB.Ćޚ5BgaGvl~tof;I>i _y-;R^yFaL*k\fFf-ST5Ko'}j\Vz.}/^MR'Bga1h)m^"}Nd}Nux{W<oΗQ+j}ӫժ>y" u&tgfY:yG<&Y&yt6&-̪(dLߴ.jgcZ2Ve12] A5*T}G&xq_@e6I,PC<ևzzsYL2;LZw_2g?O:R0!D5MԵ8x!dF_\cOkxk)EhRRtDL)x(Y+IxWc7w[mγzTL֝#|8Ӝ}g颼xXOCF{M$%S.TsziH4^jRT2Ee#%&a-blU1&q MAB6!55͂QfUWŜ0Z"2TɀLKPL.J۱z9$H vS!P|dQSVNe ?,.Ij-eBr)w O-'('M}!0irvy4x}jwِ]4l TM9=Q8ũUU ):gSm5 Jk-πGhB5:EY ӒN'ڒ(f3Vr}= {w*+J&Az ϳf*dg< J7wPd TDb:P XA(AzxXa=$UOcUڰ-Vm{f~;e&“f|z16FҜ]_֧5<'%o(T~fC9V\?e#KvoŵJڬlo3fko=xSRLܺͼy:dz{:]'0nͶCnpfwmzon|狔;|v~N&[[^ݾ]xy5ϿzO󅮙WzKۊ5lkrzOm{um۽m t~L66ហuOv37]hOa&[k=9Eqz_nYcj=h+MB&ڊ"*(&*!;>d{ ȨҭGw|L&ThJUfNRV̜2{+ejY٩.Ț,K@ L_2LHG'QksNԱvFJӶ24wڇMɃ7]J% s}*3)**ِS%{ &[z&@oțY^nw%Ǘ Ed7.eFʾ'DKYg|} AA J%$H:)M.kwTrJMi6ಃnxпjLQT MR$l6C )UF kMII:!0hI{`A{}# ښJ8ᔁ"3|ސтl u3:p(r֎Pe]MF©e.`4dA 98#9|>w3'ǵ Y+vp l^ 6]G}C(͠CI!CHD&#%끍(06k'8γ=tDNGOB!FWd< OU1*ߨ3#g/FDQ P̂\s|B9Y)-_X^;=,GB ubt8Te>>J)ڲ \&8۝Ȏ.@ ."@xc!'kIĐZ7gJF9B_񇄦ERAS&;~@cN28@⋫ \B@Шh_>6ڲ\/gk3o/}#D0p~1mYj;FX+`4qveQB ~W k_?'?fPaʴyr>;ɯzUߦ]Ю-8sk "melT[}>g 'OP➉]vg*_qS@mի.Ž'3hZj=Tۚo)Joaq3ox;&=>)٤*Ğ|Ᾱްlf"Ky"PDj6R\^]%Ջ|?MBXPʔaa 'Am 5c0"2 E'l[\ ? X7,ļH[dNRX0ꔦpXQ 1#2N5"xOt6jMyxcۂc%%s [/x-G)uյL[BynVUFI%VT'rm$ZR*@Py? Epw=>ewRn:?/nv7>.SjV:yg)^]ay{[jťzGU&ke-e`QL@yM>jb w}Zh̬IuǢ |*yl<~6+q" mzqܘ]gx;qVi9sWZuhN }_i,+@Z(ڜ0>) ow1__6.a^aŧq+l3K-xClCz Y@SJx-8UV]kuPen Pކ?L&>\A6E뾄ga )Z{e~00x%\vBp?9N-f ZJ\hW\ûʶs 1dR0|n˸KX%oi7*4p1+р&kd9 려I\gn}|E n3c* -UVi# B e XV+|Lͽ{,K cUWre,Ta(T2yFfg~@k f* }8){Mė ΔF2Fq']1,4xM^qJDjv;yg̀F@QAsgтA pXTѡ[6F+x p85Is;>0}1L_yY ʥfr=c"CI9&j9o{1Q)Dr SJrlJkq0JRD-i=B"QI;q9+Pu`U"WCW@@*Q;@g)FS鵓"XaO[ɂࣔ<ٳ>Eݞ%7in}ہaÒ]#|#(/p}h7ߔ/~7_,J "yӍeKӣ]ł5x)`-9IbJɻKB.fϥ\d^`:ze<4j `*Xj{XP}vCecy78< Jc0_O&ID5_rGƯG*xnQ 1<eZ5xW[5Ƙ14ɪdD|mF -/6Y/_zͺO(ۚ4„A;E+d:. ct W9`!BߏX5`JA |oh$7c? 
^rÜ9Q [cI"|!:u}|٠ Y5oa"h)e ɝS ^t7) (8ߨ}F9DŽRn&x92J#"(x9lV`6EbD hl&fcd0:<}"N>yD]3kNPxls.ḇ|:cipް6$Qs-9˝Rb0 |OKKaG[t骬tUVrϺ* VVFv3yBfut]{@=u8 FaǛ>pV˼%!AhE<(格F{ϼ'%}{[ފz_wFv+ԌZgM.zr&Mp E9b,peJ1 ;%E!!a]|¥&LI8 RiǬQFYJy>%7 V=ڑg #gCa!M[AZȕ܆h̗!ϺpƛUv;Ī+B?tޚײrG3&U9Eta$aIy`Q͔HC,2X:5_u8lEt1 HiLv2Ja wZIA``!AyZq `TO>-V܋ul7>h%!kHa%seīũHiN5=jd!-R>kV>xDMĞbCT[EV-fR\wH.HJY* 6A)wsNՈaMnt\z bA8_X|aن~Ȇva:"Ґ>wL((_8^ߗ!"ɻl6H l@$IEb?@(-ȸI=L_˯"aooG_KE]v M>cX32E;6Iq:7 gE-6.[I Ȃ\ sELrpq( e n1; AdPx)I5K0fWgu(]Z2j!_hJKP!a|~cJEXl噪^?0O7ʼneul8K d- {R}3K`iy>4!T7D#1~aH0a-,DBF YL|8_'zr3f7;=*A:ȺQ랕sInFz40z8.j& b"/1Άe {S^>Ǎ*Q0GlXnW%b7\u!QMJwW! l :tCQ*t>ƽ>҃z?u!QAx$[id{3HPpgɴ\3,MhOz\!'HՃ6=2C0R=,jgE[EIDGwAZ:u&a:ȾG≕X0Ұ)(epN˫,ʵ[@0PHx3jg>sbaJJ]ѷer'P-V]NZ~ʌ$;tc},5dmL33L= ~O~05`94Wk3mneH{"]TBg1#M"PYi1igFFGu Tj^ˮL#g@B+M hvڤU$O{뮹4j|t&3DUڸ>m0iG; :gsJ\s,|b_ly)f^%Ej >X5Qrgtsi}.b(֢fpT7.]@^P1KQ opPeTQO3l뼾aWЕv|h{F]_3mDŽp0Q&fztc4Pmͤ]<9ou.t3&tdGjdwi{no!6f{׹Om4}[w5Yl]asi{tG6;2@Ѭ;ĖZmݿmEƻ;o UJ07?ycKXo.q Wz mǟ7ݗ,[lSy \ն"mr0J qЄy #W|G|,]8' oT:VFHS\Q(e.}߅ZX#,%BUj2 SI$qՒARQ-8/m; >2Tz|ǧjdvGɓir&ڬZK:OÐrТ9QL Q$+DK=L09H)E/lmboni<\w:MM^7=]R|ٰܲ _UĢ{W&D 6-!SF I7|JVL:MuoRBrKĀ"D*$h1蔸Uœ̢qZK#c1qgRW)4P,ԅPXxT,\3͞f{.@O~v5AeƟWؠrNR QU4m$RGV.؏-MZydm'{j%f]ԒIP|Xg5ͻ_mKu?<ݐ#¡Dsg=S4]S HYqT18+\),=G7$ArG]2Aq>C脥X<AV!N@MNlvbBL{'B( r))9ӞSv!ں/5ZN,DzM^(&Ξ􉳾‰zR:cuF 7y[,o> 1@;Ǚx2Ӫ;em}cײAB$h 2;zʒBC60"8Vh- p#Q!r=05: Rq.A8FQL@RR:8-"%N D)]N@Q`>b"r*55`ʹ8ܠ9P3,K'-&n`z،xywޅJ=v/rwI3v`O%ʹ%ly]9}jO +n7!%gFBZF grĄ%=dvz鄞y6"E-#2@s 5Zł: ;s*y Ov׭zKC:ZFWoF8y+O$4|LIBhe`?E)0[}1?z'd6Qq.n| Y_*ـF_v'$_9CLI T0K3@e] jtHQS\KV'B5a0~EfrY]*U.xjpX UTSk dğOwζ|Χՙ( 'vp1"o"wO5o%%}WGuMg7\F–?-_{Cc7?Mg>6[m[zz0q6\`ӕϣzf]tJaQ|nk}.SW8-?-VWM9Wގ2VL\3l@\WCrLyRԆ0SMyގ߇0p_Ȯ9&W\{A/I~띶}i3t8N`8۫Azpqm]vO9=*Um\gwp~fwkN/, g'8%ݎ|βĊg ʵZ}Ae]NqShjc>Lk>\9j[h[|9>72Fx&h*Y|~ѷt?DW{P3k==0ѿ|ŝ߈G+X ]ry`}C{WZ>iDg@I b=`K{h 黺KKVORH8U]%˜4=3QzJ{xx*{۞Pb1}m[fzeG]\αzEks Z'ol"o;h;{]tm~i&mɢ {e,3ѺSvI~eQ8 Qb_?ty]9+Tݴs̈́R'9ȸ ]T89DtԼDK O^q6R e> %R4K|Ej_JڋOwUP.% ` (HsFVƆT)O!k nIM(=:FEZ\uH)%xc(R ,򄻠 1\T/ >i@.'uu:efjOH&2)!W<8̰] 
3'4\`jڇ}׫o_GFϣ+˾]ElўVg6sږPWW}[׊ּjD[{Xݓmi1LŦD+iWhǃQuqsew+yl+dP+}nF3ɓhNA6V Y!X-zV !.!BiOItt|Ni;r&yQ[o畮Zs;l"y~7fjbY/#k$`^ FMA FWP*0*kހHY :᭶&9 "NтR(̢Q%OJ*&n g}v<gkUsogE|֢^!6r)`A".?B4˟tTK'CR%(YK8e/oma: tHLXZYgy,l yoTN8PHb uD^z#bN N@Hd"`T2>o?c(,փ3+a `uj٣@:% J"xj5ut켔N{y$|N;w }99 yGGÕxDs-@_?{Ƒ)!@Xdll3bH-,y%DC d3uUu=N9z^!öN@gXWO" qV ʐhMy]W̫b^ՅZ+ =5ڬV~ik~Wtf!J(:fB'q"E3 iVu5AsUZ7 /LLu[:lWٮUryZRM$ _ƓeCȪq Gӣ2O0rzG5IG?v;59?A?O=\''\lG-.a rQP&ʊ)=Rb^ gjŸ0ƎNJUeEr o @GJB[m{@Wr faC,BsdUۧEPpV~Rf\f7ex2%rӟLѸ=eprs 1NFSR!(gWO8b*\v>J2v(p{WJ;zp EW?!{ʩXWdgsdIOZ NO_^eḼj2=2,N(|}{\ә`NAQYnN-gDu%¶~x`v0ד+)*u"| Ԍ a4KB" RJRmIU]۫CZB*oT._ݛӵDmf~ݽ :7}+*K/Q LԪ'(u{A[@ RoBzUPDVN}-LKu@p8GX \%r:J2ptprsŅ< A<J\%jwJTj ~+Lu$rU}D贫WR0!hX \%rD$jwJTJ +PW`\U}D%E•r_ /k'E^iUb31<%\,>JɟU#E g_7kׁE #u5{ZT,Һg<2Sˉ=sM1]<B4m(K9=!^{'D@V{#!.5Q*ϰPSb%ZsFED,^xYlGU2h?F&q8q7?{3JNIQw-WEyG:Ϧ3欧=TϷu: m&oވmϗê6Yt%JT " Xд (N;#'D}kZ&Υ Ąsv.O}'}`U}){6ooḱo}wx\{tW.tI,qTF- ޙFC. myt3uw+6gXm#ʘyn)HR1J +0HJF?aR c``5w^7wOs\" z;{4PW{V㉁'`4 \1ZN".<3a DO;3/}=!Ϸ<ǠS"a0K͉Ħ *Вz-FY2y* pb @ό/FX y8ȃ 3pGRGSXP@"J3؏l5Ӯ{{RtH a]T-dv] !)C(mˮo&nL"[җ$Mw |>7Ft0xߤqki1^:AxR_~,ĩc*$24U`>f P}Xݚ"X)}?GK8^g2uy"1+|1Q7ۥyXؚƾ!xz?K#5E$+n\Vg@Ƹ4.n" CEћFTbu )GH'EHMk ˈyMCEmJ}KCW]nPҁ@\jT)̦\Q=P@FH wKf/!69-E{*w084m>׾{5I=ɠ?wLSz{&Ur$T|=UhoSm1ݓ p3`]ݸcܠ߳AG,Pu=Mx`0ǯ'ՈIK6Qy1D2=|5a1ģy]W˜hûyo6޼FON(%1)P Bd.P k8  fF7 a5:z.~E aE tt wtvÂ+(CL['`NRyouJSB8Hxz̈]v@/q<'7[_XwwkFk(]߀x52lQ/{bWb3R=yi\תȾu:ܻ|'s}O &Lo]?:Yx(eK^D}Yb^g8%S.)Wӫ< )˙ ;\ߗu6ke-e`QL@yM>jb w蹫I.Ff<,#)ӘRtv2Ja;$8 (O xtk/{< {,t ⹅ul7>h%!kHa%seī` 0&? T[Z%"H@e^5u oHԄI)6KLS@]A$~h#]d(DAd:A)wsNՈaMS݀b.خ_ OB6S?cBE" |#l2H l(~b@ϱdtӠII=埣Js)/TWG= +0SRt|!c^cp{յo%P0 8+TO!JtH\ L }w^$牋`\V6Rlןmvc V V@(@ n`zU$eZah L*8ɏ3Uwgɽn_+o MƗg]b%slqmqY-ճs ~ ٥L* NBM3)~+gb|LMӐi(3D`L7㇃zٜstyV~ȦYs`yrZF47`$Y8;8;?(/gs _pٛߧ;ǻ_ߞsL?v? 
[binary data: gzip-compressed `kubelet.log` (tar member `var/home/core/zuul-output/logs/kubelet.log.gz`) — contents are compressed bytes, not recoverable as text]
C?qi-)#Z"]=u<\h6T{|+h#@L}6_2kZvY[7qqjj]w/D+,<;g=c-3Uf5#Q[^륿ά_[oEaoHT\onBuC7~sk=H* 5u!WTVCs0U"i(s6wyŵ6sUt wc)laΊhڼf=,Gܱs'H2sǎΝX0kw;wB.c\x'ܘ@i_~NBoNZ9{%t|!Ӿۖnڷ]5oʈԎS_^َs:C<"a>>i(Â}Swt@~b -v֝AD^ݐ1Db; _iJW XFUi 6}ddr ԝb#\U%Y1c*]jd6U2OǍYw#q \HEZβCC% "CIr.,B`H iC#{EYC N>"D"$ku o~yH+I SG&Qy+^Zw#2q< p%R4Yx#U,4rN_UQH)9`J@l vpBR=PwYSDSd7åDƔpo  ڐ\D?X-Y)Z4a}?]ǟÛ?A"5ANuc2 OMўMp{P^Kzo~ې\DȔXt,[fR>1FP˅~yÞ?Nc&/6ohNT=RКsuվ S L1JsMJj7QJ + Jow揹|Vܢ& aD7SWLڬofb0q@]Q"V8DUh~3ݖMv0Mǡ".7nm.YG5bJ^C@hnwӌT9?ގ&N' !|8D1h)Z>zK/E[\xm|.l5?4Z6`xVU7WߴuSߦGrч]`Ge{8$RL%j:*^yT3%kI Dʘ@Ɖ9)KMc \h׫eXXVHJkXGnc mQ#bٯVpmT:a(o4} s+W̋XLHflGHZ5+6֗` HR پk{IX3YGga RKm9_-"2HBnd#ܷ݋6%%i!^DXgL~&DF$Cx.e!g^vɪuGo.l~NhIA{i4|q!'nk^dnEPEp`giYmAfk}@QȂJ]mZ5n? 1F-?睯6<{1u{!JˈRPAֈVO{_v豞(=}9f&`n'>b{tkCQc4wut{)F4u]6:%_p{%K }hͣY:}8oD:c:^|!V4ب> {$֭4M3ZyZGB0',8$} p[9trd2S ҁ є ayn5=\N1l9.1t*-3BêFX$;P``ЅY.AOiIX GT 嚰!µ0ZJkh\(nbKCpP |yNqGr`vr_HDA8l;bplpqH,P OE8Zw0RBH%jL]H Beu6"%%]W:oX]$uoIeT||ENBq"IouK5_r˃OL2:meoy!2e )PBsBȓ %˱*X΁ҏgH2lw N 4!L"2M aN4S FI&D@Є,滥bSbLUkjf8rн!bBa/AQ'0b|n7^}}OR3DcjL#Gct/*X|S'Ȕy]_=~?=0+_Z3|nt.  ^b4͗N[-.3fgdOX?o&5j ʈ0-V_C1P 8#s2>hNXc;A /_kڨoUKbFSXcw#^|4u,=$󏗃On^,0{U`rF3&[&.9mްxdI<hDk-H e46uHw+L$k)2!HY.iri?{ ÈݤQ ICأ|199M5Ňb4]fN8sL' 幯#g!U2elϛ_k>$Mvp[N9>Ōp@3}9p*H'.!4c mkR.톟 AGYgٺ$cjEeэD:(ی ;+ )A!Vw_: –pB@𚹟m=`!P0܂@JV00Oqg=ѳۄ sYӨl K:H#P8XlBժi|WoOBp=RH Z =P *5 E0U24E)ʥ$ 'B'f[n|DP,@@Z)f)&*G))Ʃ"I _AJaބB >rFٶC$ D.Ck] i/K;jpD7b_J,K9t%56n0% N jńj=;D 'b}Oq9{=jܪJoT>t0n25o4g ˽fͻ(h#SqMɌK 6aN%QV(oV{*r;oWA1 $=QIY r]0Tnf*:PIL pZan0ۀ]%ޫVfQo]{H圇ԧA}LMwf3mԙ-Z.#=bH7(c㵾uiF+ebOwvuk]`0B+./KOW&B[G&9 }(K>Pes>sLg}7 2(&1rC%3E`qN=ck֚fܖ T'l;CmlKĐL7 C #Fnm+1zsNz킘nYYUZ OgvC/}%wÈrE Ծu~4H `ag+ 3` 9-TLz">;RN `B@sY9+q$G/ N5~]ƾ-XI;TUT2SY1-UV}d0H&EPIKLj!_䫥I_RsZ? 
r^0FMJ1{<fD3GKeL,rAzC=UBw_2ˉ2Vpw5[a` D V)ǹ"A C2BuXX.G I c%x*)ӵ-$o Y)rau#(j)NŔR{u(hk֌@m*&R++3,m 00MD ,ڢLVH"Զ[VjmΉX _0 ҨQKwKүtPw;`^Ffk~ZJ1ft?}^ IB/`gHB}Ƶ.+zl)"G ISpRr1%SSZV#ttНבyv:24C;=(ݎBf8|׆no~i0`Y>uk0ۼY~ 1S߹ёf Q,ޛQYw$3jHfLj{& 'F/)C~wrURuvq墯|>eVP$5"1Duy{N sbaL@~:\= Dpҁ Rճ}w2郊* TtG`v,6&D7-9]X`RS<֋ t^pXLZVP`iUÒRmuC#`6rZ{7԰K3z*fl{ڸL;yz_fGX7"#7L %gP٢|rEt-_Οe+,/P<9B _qy_LPڴïᾰ6~eI_\%8JЈ+f! Ҩq1ʿMK7v4r2vU7} ՗/I ݈}r\mI蚫^$WΫޙf֛i|!P5oGݨR4]ףRިRw2S2d\ם)H_]UpYNy䤴pt%9vj=wH$oqLa>1G㺗qB¤}ˎo:,:FZ魥K%zm]`š N5ÓrOu S{,LP2c6 @jeBPr#& - mr祂Rsڨ[ރ~qS~b=W#OP&x۫^Nc-e;O}݀q;5& 5LM$B6kA_La;q_a.Tބ;0Ϳ<`){0kƅmɕy0)XyE3=iß,Ǐ ?VI>~ fˇ?5{|.z&?`C|ZL"X cԨ/ *kb׹8I~ =[ԼFFgW2VJ@Dvug+,fĚCPϴ.`]7BB$\#iO]=}N8k>Z׺t2N+AK$J'CoY\:Q\-g 3("äʁ]tev[eX0qT0ARi9!F&¦] JX1g'n[I@"x1-?Y$ v$!=0g<,qT[5`ۂLBu,]\+%2 0 ݪ(PZ`%t? +g4L t ynyۏ+D/y6?E0x5M}#g}z>̭N2yվ0}%YRKX6\!!z\5Vu+ر<|B)LgU0 {+(0jwBK+)!P^q9%nsJE p4 u"R zkj<>#ui4Xl VU(|KnqPZfO˱ k?Z7Bm(a6K{ W :p1_FK <86±:sbrr+q8t ?imRd!y씢(EJ^;RD.2j$*(T:VNs" oT3T[0&@ VG?RYa [L@3lN=cXhW$фp<%uzn/*Y22 l,[|Oi!FR{5W4?`bI;; =o6r~0 gquMj"]gWA=R#1*]:J#.^:up4W]q.pIc !OLn݉tb+|ϹQA@-^!u@ CR *) k KC ¤ܼi-kz_ IK2$X\Ql*rk12ZJDa]ޟ>y$y 7=J@?Dk$D͜L HcƱh{I\ qE&X)ָx56/D;+m!_?(L`JŨ$=/L [⹡wOyFv{ }&T<(!Vt4gdr.!_z8qS%y:gIsl4_$^+xZ4pNٹVOnI֒Q̘( X O Nf|L+gAtij-8tg8ӳW$hw$5[2`bwO,`w(Zfk>"#uw}ǙӉ 0V.}>Z] }7W۷*9Tx_ܶ2bO*Hxd":wܶ݋ف,DEGyIwàǑsh$MaYmǍ{b^y 2(c[9h$8c5lYGR`c/3Wd7f_rrA DZ ø& BD;c$Z(FYZE!W}C^mnKor>uƈ'9 谅 3V)µ9bb`9+Y(ba$&) Cʔaa OFsyQ3 C+BKJLIߟ/CX vrLG}VHQJC0˽rQ F@xY1Pҋn!eP Āq&As08\0Fb_G-Cbp4D î2789C2`CD1AaT u%g;YH xv+y4E~xv-ivB.ʟs G:N$iQbd6cAigAAR>̶8] 觕36[v:.^Jv^KQP+Ϸ"DGZa)_lilli91Ƴ#RѲ; f9W@})q{(씤D|zF9ˊ~{ zt"tbsA=/YkG#XfF'Ja&qg'C.`tŤuw x\FJB9 AREBمƑy #LLΛ0/m􉂣 <3?D{{U65ŏ57ր.~|sU6q)3c[+7&dTG}mRzT_jJ}L#3${PD{2z'dSl;}uNì{IG`ݾ<֍~2~GNxO<\>yu= G+q@@yA7Zfflsma:F,7q\}qH4{x~?k~sX Ղ-ág[M u1oND6kQZlBmHڌ݉\7xߟsnwЏ&Ok6`BUeOl gGck*yP YGcPBׄ$<ׂOcA.` r[puCѻ!#}T9 >|wqbE{7X,I<:5̡ +RI#49 ZmjbD+@v-b+Iuٯ*ެYbjqDecIvNK0֞t-^JHc:R9S{ўA>b{?'}TODINۓ 4J3jKC@28r'Z:ŧzQ{)/j()ZuNPy m|a]{eJV'~o4!u_:JzoNaTiӴ>z(jtѺe V1ɿ"50nт>H) "mr py",5lqv~{M9QX< 
SGP$ΊF! LKe֍@i4 D7 ^( #mm5 KKS(ն|BԠ?ZQj1:aA(At@0Кv~wocWՔV(&۵NMrQENF jwF8z[o|PEyiׇn=4^J~Dބlmx]t }۫vL3Xfkq |Qa&ƛ,GJ-r qB(If\SLV9bGt@їQöw"h9ZvFݎe}k-DjʨS8̯r wR"ӝ-KӤLI`&Q ;;h<)sϕg[aIfAՋ+E.,wQjNj̞xF?,uEIrA#51}],8a7!:; WUWZ iBXZ=!ym ``F903Xkq?5ӆӠvCΜExJޟkb[-2;F]<hi{i|.QIV9F׽i7e?hX ʰNU[@SBZn5=%[r,S9ڍ^x HEQ)fa^yit^٘i r,S>fQkOyJ'YKRtʁUC7ϧ|ٔ// Sdp50kROʭLUxJСEYsR^ëU8\ ,s`J9[UxJoY2wSpHO 5(傤\Z%9>SR9 O)>e IR9Řkb̜`R9Ř5\)Ɯb̵J;3b)\$Ac&=)\$(ccN1:%cË1S"h1s@N3m3bgcw /LU)Ɯb̵J:^1fL1cSRrx1fFN1cUQ^S9Řk~ue94 N1cUË1sT1s^ ^'9Ę9%|x1fަbgcb\tD1+ P23׊s1) !< s1* ŘpT)Ɯb̵Jcc:c>* $OWb\˄ƛ=wg`O ؑ BLrq䜂I1dИS$8q0iϭ>tHd^ߘ׫F c[zۧGnaV!oe^s)SC1+r+^xWLc\c 2o`qѯX 8c-scϱw[~0sx8ч˿q^t0G,G)R?$Wz#&.{JliIZXUM7Y"w !ýZi!$X3)_ʅ!p|*jwB -S(0L*񍖀~8gY^Ip9OjlWNR* hS$MRm忟ƐFHbfH۬XԈ@htfo tm>_~>+aч^]Va`a;(ӻ`&cD(T.00rYdUFࠊyUTs]l>ZnH+;;=u3h;03EzΉT`ǜ<bsTHəU)3VPǑ>ѽd0e+nofF.U5^K/~u{00z [+p`ަzǣxs bˏބףkeGy5D .n`_ 6 3X@_O27,ߗ؞G-r|qhYhih-XJ(`xfik01XX UVy)#{-|W[8du%׋K? /wp1 7$! @כ.}키{cD6rZ?GձC{7-Wo ? fxOʼn@+\scbǣ^860 ~5xyimB.Ow7Oe}JzFhes:\Vj:wZ3QHai pܫ:>JUOb)V;OT̟2P@{ /czŠJ^@!dJ!kshN-w1/::X\΅^!%! 
U29tpVVaSdkBVd mwS.eXG  g K6q^{Y)׿k֕@˫".檷z{X|/Z^h.O )R1)HBi}b݆Z&V7D֔F>f4c[*1K'u:mW(O֭/є5ő:xyݺ][*qeƨcxNrW7DW\S jN;.Xv7ls}Τ!?xQ' Z 3S6Jآ0ri#K\;T 5y>^EX")ZIk328ݞoA{'rSP1rK(azQ" =w/f6\ =J)OH"}he54jHiRꋮa*+ ܓR }8h\]n0)Za6{.)m~1vR=wer ?߽tqZ>,37_)VXoUuŏnj:V~"ܘ#O0O~/.$~ϳ J323y@V-c&&]'MOzz(EuG}0iIHQL/H:vGl{7L]!Wk]IB닲α@N4#H]^ౄI1̜h]NJܻ3ꍀ$H\Eb)rQwy[k":7S'7g'?~~r5ONO1}gJjuzp~"$n`>QNFVo!nf!Yl\d}z3!Tu5~Uw,BbNnذt6B5ia4@2F 8LJfӍ߼o~"̜tAMTɧ?h~~h:4]=]3Gxƕ;fy>zݱH+-U$LR Ax +Qi;ï呒>ZReDNn({X|%2EX4E[eO85^vv?@a{oE?*EP A.ʰ;3N ܆n9ὕH3k2*N:Z ޟAKgES-J@geCl%/%(w _ͪw^sMQ#Uy1fn%H)t%fL"'^H膦BI.- "eTՃ@v*RaVOl%cw!#D9QV6w$~Sgܲ|`w|Q^U>/v[k!} B兑#alxO %2azdWuBE"Z2\ibC4:݁qAfZ`[Lhk ##ux&S,Zvɘ,6׏>PyǧRFCH _ܟ9ABy'QC+jCV(#>O5N_/:mˮNQofNuHRWR[o=nxs 7Oj5e1FQ 7US|oթ8em}`&9lc^Q7B©W5%W)Yxq kyE!E!9pJsSX/(]x+RU`N[zCP\MB$UWj&fai7xQQ]{eWS@{IkS-PgZB.Rw%HPNB[TJe OC fȐ9jlg("DkD81ےr[Cڑ{b+m:/|y]v [8RzFPY9I8*)Nai&hw3OjMb6^q] !P.^5KoP}`}iݙ*]p/=+'}("[ҰYB/7/)(>Bˈ"  /H?"2\epᢪ "0^`l4΅57ۂcfqrZ ZDT: k]}URZ|[- #4lH O V>҇N| $QF)'4uA@hKij dX* M^+K5# #,peck*؎{xOAjІV;4T|idUd䑗.փPhv2櫋-> UI*xQf $Q B: Z) Dp2iabVbK|ŦDrOq\P%%8w'&9 w[id *\y~uiZ:" FK* muh?)cNsn9A<|#)$҂Xz48+n쒭E@zo~2Ƒu>(WZ-%J"5b JIU:iLtvPgF+"q)Ust) *U۷G7~)ZγO,fuCJ\HdPw+3E&.ljhdǸ̅TvPZ=E%*cǸDtohX|YyLל;Ŀob*c➤PO;l yez<>QAq񦪟ͤW {i,d4 N1FO LyfT2߽.eb>\%!)ֲZ%xww=;e|~5sMn"dU9fvӅ)ߙ$}+ټgYYo/ f|^/{'O> 3$BUSXb?{ͤwR*G&&< (r ~e;)ka ͛}Gopr_(!]p NBk# v7q~rz5 5FPЛ?_| 䟯nX+7_잰_IKCӽ.xu/U7 a\pIE] "jU}NԚ'u**SɌMVL+ߣ u5($:Ge)y}4W&;l%2f }5qɱʹ"cL$2̔A=ga~K;P7f m[@ίDv2i}G]0+v`6JmdYT0܇$y OBJ1ТLZfzOѽ36wա8FRp ;ΝUf|7}+):uW+Çv],*5B_`"[ƾj)"3N&|?uuRq?gz_,Zx3nn90HvX;쵤Lؒi4=$c[&UbUh]2ȗrpQշxd5 j RR?JR$$R>/ԅ%tބIÒ&G4o_&n{̎ /7L!>$3݇dr2z:}N QXRx{<gE!Sң}䐓Cx&YiV[z(sSbrtQ(4]pnQYNyxMƓ,چ)կhP"ᜤUd>ASw /^n* T1h}tQtܧu2 M֔<rsKtf¸4O0\9215=5>c"= #jKbTȔXQM(/(k3`9'*˴@F5g6V+R&HJUV{ΈǛc4h&†Ũʂ8H!Fm34)2TP;Éo=0T]=6ؠ:nc&M#.s":rTј$@Ҍ2sZ`RK5aD0a<Ͻ|\j2pTDrTLSiX.39OErUrB3L НZ뵒7/lXЦxdc [d72.'P7Wj'Y n$ i(Ռ|p\ ƛ0TdžQ,o(|LUN) Q<>2(7=p!=lԇ\ $Y.xy%=ӽCXMFK= *$.sxù<72iꭵ|9kqS"޵\Klnf֛K uٸ-tvTRլ&k-^|1B%4.O^A/$o0Vp(_fh/ZnC?m>~Skf磂|v9 ؗyxY.( ejB/蛇>.BKtjG 2D>C| 
j9yP%y0ٔxnٻ:OXުxp|70ze_f{{lnED$ML(`r5GDb̸h̕1a 0B3ʔw\IHdfL=4\8P)B jsS]e!(qD{*yZxd)t(́)&$DoJO,@rDe2egʥNSqҞs`9Efxxk$:ZJ*e'\pGuIE#Vmˀʜ(;ܭ,w3gVHZARt< rLJP Ta3$ Bs P r6á² |Pb^@Џcy[5&C=]y_Z2DZtcJ&^2s1Rh # 9 B05A[&`r^B$KKQ[07Sk@` C8?`%bf ȉb(@&v. $ NC 5X-1N*QA)bc xD0P[7`eb P3(]^v,qS)`q-4EAWJ!BJ@ՐB6%גJ!S*HB~7=N9 !:p۪1%rbLTN<*ayR*zjoQ\Ѡ4%530rǹRZW0x[9t-̴L q:QLyJxJ1(ik<˨ 5 \ V eI̭Rd=!+ :M"OSCJcwth-ŁM0)|&#i~u&qӠ-i-B(鄪 q`F CZK~ ,n\gLF&k-nR^d5r9*z$kMz 9R-5FKm9Fwm_3=h.4.޵G|B4>{.^D"!>R %.K ɵ$I1&d;DZIx%F|O?xpF=(p1'Tc{z.2`;%ntNJWgx1u?b o1_b8_é.Z>wѿD,;MW9ET dށEN?rS+~B$cy|5kp=| B ۖD݈2yN3& {'\\f[?ϸXQdZ4܂rKF kע"{`?ׁ3Vb:V=u 蹜.lNjSv1 RVc7FN}9g4NӒ-Zyȋ$tv 4U@ bq{uv|ׁ. #Ρ3_$ꖐ`+)BiwO+QkXӥ/;1ΣQkE#Bqwh0nJk MoC&CmL?dH[h:7( !!LW7}P ehԐjp"l}y1<4$U2NGj !1Dy?7_Vpӥ76xyu=աk?u4c!9tK}.q.adН{QwsLsG0 -E>pQcZW!ѹ07  GzIpy4d LÅ$|UԐI, xg 6-#QFb<&IR͉N8eJDNBlJ/vK;⡒RZO͛fR!S 0GzQJ\G2~[L))$򽪍\UQhq:_G<4MgTx|=Ξ3<=/Vf+ 0џh\|\1./v4{Ƒ 5Al6 Sù٢?Gݚh}aaz:bp)I)t 05I3B FtY.-_ xMaMsO y71`!&Y) yw(O"J '3#y*<4i~IRҋ!hZѻA!t NTJ,W 41r4gKe5S &չ@#yIa”ҒwtK!P~M簏y-:M0לf{Z3s_t9QFrFYsFHKʲDgܢO+yM1DyMI?#e M!`Y1d]<縗5ar9l^JT,o(?^4w .˾M18V܊Ym>޶7ql4=vΝ5RO]Dl$%%Q'}awN\YsN]Dƙ6XSx"\8"}I;F&q?@ PAwz6Skxd4FDi׃\lB8 q5h# bH$TQ'W2P8X:8(Ȝ C |0w@ i~Df̂Sj*۴ҝS@G:ݮFA2<\5 4l2q*LgC-1-e*p_YOB y2RZqʀ?N/<lR$PL 2 E(m>CP :L F _ВC* 0"(#cXK%/Z &|I-b{j; ̔twx5v0َO0¸pw865I@L=JؖkCUvc{#Mrop6v!ί:\gMA*0i|~6Mee?gխZ=ѡ͋NUe7!G˳˷Yf-@~/wQN:˒~<)4v%kxbrHSoȀW[qu !^SV;D`C 7x^gCg]yt:-TP?MfԖ}jtD:x{5-eqP<¦t;&z7PuοA9UTn7Noi9`y_Ur}Cb7o;ˡM7&&#8H8ҧ``;҄Ti'{yw;>kI[eRKN$` zC(gz}t:0?|t3d`/߿ Fkvf#>*^ż3Lϵ F'i܋Ga^V'5Nv hYDlӷs.e"zwMo*ng,a1)g ~_Rd;6C?fߝX,ۇeԔ6"!B< y} N9"I3<$Q.)V29P)Cx&+87\?,_: xa596 M=meub3Z>fϭYskd5S:hiqA'yI:ܧ2c }Q8̧40E4oyײ tμapz4@X Tȶi.Lvҍ)띠6W +ڃ%1CQҘˈG$Ҿ 1_8"+FZ.y YBWT*D!kS'/TD@^D(d@q .I~JwW0ݿDlè'}BMČiM3UM v@"Q٧)€8V nEI[%)2G)ϗJj)IIе/YOFEzUex 1F]ݯ(2l 1%>|x MD rX[<܎bl*LLE8aYF8NޗmS؛NHCb,8obi0tiy3CI-DZ:?Dg =8e]*eoh ix~(nsDCa l}[_ 428Y:m. 
M9AFq5Vf_XW`١e:3Ү(w/!~1)~*mB:kdOYl {EGi2R,{ C[!|Dl"|`` O AQL)X&]U$ &_.|m ߲'J脅 t嫥.Wa[ܲ7_7ˬ"6Wiֶ_.>a  C؂'w6ps#XH ;@ l5`mP!ͣ$UmpR_Ƒ //?DL$@!!Q#0 DON`~f>b*wX;E1Y4klH%<1߮S HۑfK,C1Ic<uC8\}@/Bs`g00q%{B6"KҤ\m;ě]9 H 0dĩx6"hPqP"Qmy@.9⹔͕7vV^,V;T([\ B"Q6ffhT(HiWqb1apD09A|X=K6gaIN{H/;PZR) %lI+6#S q.z7\KS T*Yy,]w1lw7<. !L)7*0&,+5a{\f}Yqė؂YFC t/9#)C]4%`ƈ3ˤ-L"4>fK kbV #ST/&ܹ.Bi^?wsm^|nAj"rvb  F>]8TQA\,$WpGI]7-Ŀ,Yާܱ7)Vi^h7|]S CmWBb Z;X)W$ w+${ϋE18 %r>}h4xkd9;A (Yx9Y13|x76w! >!葱ad|(MMw1]țT@UʶBnrM@'d7 xJrKR܆FqMqL0ƱI]%8ф6 h@DۓTzRQ2NrLRFBq}LQNO6]i@<^yAyDY`=~' #C+s?U'`o,I֚DQ'H-BpKQ Ͳ"EL8HPXywbJ{ཐHsg@]N8;=ɿfc 4aLU "Z}?G@yhBĻӦ\RI9Ĵ,Q*?_/jVE.yrvX|_s2n P.f|f˥]N'Q'g.ycf˒YeBlu)1'qZYCJIK!q;u5Θ,=qcw>͏QviLH]ӥ/mҖnCLO]*KD km]g)вt NR c]r"K_Rnb]5urF+4G0jg|q(D+-I>)ЊӞd4E 訂u{D.y1 (^8f,&88Ő#e@H,WvkBRHL' 00hD p@@0HlrDb*5R0R= )$6r@*@?M$^!Zc0~ rU0\g?}N%zGUR5dvM0&R " p&F%߆oCFBMg!C!k67  囍RgK w^Ewӿ:6ݞbx{*dJEީ3һ~Rnnx2Qv~I^33rUFVTᮾvAze칲~L[{"B ;Bں#d*#n)I([Eeʣ;DzZrV-#nfY%l9 nO/$Sd 8iaGq t^B̑򾽏㲋hƑbz :b@oK naƮ3I %>MqiՔ`ޟI b{U7O⌫ sO/x7QdF/8Yߏ3xds4Y'fBУ7Q+Nѹ6BPt N}?ܶ֙3׺?5 ܃- mR җ/K=ڎVF*$ܳir[Hy`A%r?bjDZ>fI/ʳcb3"GhcòO7Nq3N1E69m˜E_jgYs ' ~qi+|Ki`Ox13>H}bY2L(ډM:ц,b~/fHDia)$CSSN!)m I+nGUuNqbN*НgS+iatUN؎/k! JiX險iv FUGkUmi}rqzT_OgGH҃g_1?;;^ݪp czugu1zx,/+JT(Ǣ #er^>f^!hv@7 ( <4YdfL X;'C5G1M_1"ܗ y:LߑY_>%#5,WqzD5$$V6dR~j -]ɷ鿏08Q0uI{G}Q̺w*ibgd4IMPZRY8_O|4S!glWxEvOfhrG&mQӖt![\&&#@/,7{F;r m5ƬD5v Ģ 3=ە锕UzT{T*ˍ2J" ppEP8DPP TU^l(Km<(2ynE2 A*Qiz)&f^uϹbX4LYQu-\p=Yo(&P].A2+܋UF,xO@e~9Wb%95\y7^.M. 
(Duc]h 6W>b׻crY%87[TJ˥%ODT'@Elj(q^edO&Sֺ-Slkr.c- K;cGuYoatuRlQ.zAiLA]4?֭,SZqcR(]͊]"!o8B(rVbÒ@®"sbE82dJ$m.P立/l2&.nH:l& 2Y8x]!{ QeR1L]y!keg| J+-U8Ƃ{Ȃ@5F0mbUlI/8C%ja`<Q 8 q8Q0H$p`}zrш0x܁)xG1B҄ U6X(h0BAP ]H!&('h4D{h>œ1X$Y/WpQr5UCLIj<ʾw5H$$0//|  A?$7a DYx#C$BNBېބ߆78"$0o6ja/2ySm#XO+t{fǞwBbw=ؠhwBmG=(ov[}?Lv3(_Dq׌ GJ#XpW8 _2\Ys?l)y}c؀ %1k`PjA ;?.2켿{ 4ǎnaSG΀nG[nzt^.&}`{%tCIWv$Zu辂EB*Ydᅴ'qFe񊩋 jN(2|T^ x9ՓfJw1E>"Jd .u/qHr37{Gޢjֲ5ݔ(n) [>xrԊSAA yj(N}?ܶ֙lЏv@ ?~΃'V%$³qOIvOc'BÍ&`KL΂VbI/0KXNQU=?88Er9C.M:˔+x0Fx>ENFHP,AhSH5HjyW%/:I;4SfR1PxP 8ZM:Q%YNqԒKnx)8NckVhIzPjɿխ ?N4c<D>"l#5N+^OvTSPe.b׾8<=2 n?~S~;Zjd`<)$Sfz;OD4>V$GZwG$%˭,$z<9wػMfy:{u؊^Tid6&WqdmYXnkL#0G4ƌAw6Q*7.M[ embN44FIA3$GEnsϘltlKУ}?ڣz MWnVY&m2 hKv9W$+9 c  6HNgpmk)HI\_g؆X09*\ vv(cp4"@b )$A2!eC6"3: Ƥ"T`8H8FpT)8YAɈqN$C$#FeR&"Bo ɧF[Hi4$+D7qԏ"Qz J,VQ߮ D !AD$|ܽ׿~mH! ,d($"|"rBކ&|6!|Q )΋nzT3[W?ihCT;IݡF~ͮnx2Qv^~I^33 +b=)c-ze<̲~L[{1P,4v0xP'Z߮8mX^HnBNfm@q `Ÿ;jrPZ3"h?r[/U`DFWhHBUI#ʄ]q%Qc~PE'.F*m _8;a*zp`TUR*X f8ΟtaD0ZvRyt6qqLUĖ ݯ=/mG8nGOU^ͨ[-b/X#w^N v&Ƿ1k,6{4Yq-$,? Bo;Z!"j=RORfuVv 4.Ӎy܌SnQͨDmvx03T7ʨ܇ A:Uy}o_dM#*e]{sF*(ԼU9֎T:c`B\tݯg@ EJl$8f~3~="0fVSJqTQy,8TNjPB݃$QH$M6 Z]H&hbgO ;NFIFEhdIhd I-WNG:BgVxf{߄=X߃׮NUAY}3p=TK;آEm8jڗ!F]+a ~5+hD+|el"U1xx78LC1҄IE,$EC4BRSi S)~tvpj )L"6u_.0̖l1{50ǥ7ǓE醷V7KFm`* Z\ft&mܰ*xhfhm?&…{ *$@Ƿq*|9f5aͅ]rܶ_u oGKː2e.0ta !~(~{8Iv rvb<v$9 lOa\{Fa;]9S^x O?Ұ;2Wn;NGq؝57xwnNo(9* قSҫ$Te{>@˿O S_*KW)W AX-TNaGg=M$[nOD[&f>OuI +rJQ{L hzY&ܭM\ߴj^Xm(=6eJl{1hzG3/jhWf8r$E fnh@F03fN-On N tbugޖ_AOٶr-1!B .cXm̅ މO&0]xL qyF]{K2L:oIguIP!is˨4ZiT s`yan_̔5z&xoˬ% 2ϧ Afgiju6OAWO>mӵj_*{?kOZ)3n͊Sv/쵕qM^g3S>A{qI.-Ӻ uߑm]Uk$W+E=JTΣFT>g6?{zn}}jQj}6Y!9>@bNaw +>w_Bd< $ ]Y;}aR;չ_{TI Tj:p9e(Պ_`YJt6ҡU&kN;~8!me>iy2BbSqٴ̼kJˬ@Q$YCNjQTBT=Ld~֟x0.R1F9]'3vW̅/:B$5Սqɩ /t G FŴj$1PEJPΖPbF7=CD|B<P=UubGK㘧 M0;A)"*RidhRq*wg9=T%YNe>iYNJɅYNbSnNLEF%Mo"r-ddsέR+.D(22f\8Œhh4XQ`ahdh hh1\H?6m"H 6,d\֢P q9/G;[TQT8ݻ!C7d%vaHy7wCz7wCz7wCz7wCz7wC>pCV*Cһ!5Ha$ۮEA_/ /BZ Fg 2m/WqhyNn]ZOڻ~hlWODQ;6nnn,. 
N3hҊ)ļ)ߝ7"&7{S7{S7{S7{S7{S)JZ<{S7Hc$mmN7屘7Ol_Uٔ6*,<لn7a| TWLB0]zσ[9Gh @{?WJQ/qvt 1: %n!0_mQKŃ`3q6)Lt<~ 5~r{TdQzqzԷ~GLI#MB!oZAQ4H60LJ?(ɓ΂0fl4 aKnQiNiG,eEv-ʥڻ[wUvQ{wy{wy{wy{wy{wy/jZ3]Ԛ-sGORݦݦUA 6݌0&$vG1Fmo\oTmʱڱ20:bBS dߟBS-/{dG)5&x0壸] Ib/W%< 8:;3oQ=?Wr幂)F2&kXp3$bD񈑈?k[F!`P# ݝhVw'X.&KQwN5`vm硙TG3l![0_| [8z0 >[?E0psj:yP9N#-xIZ)c+nZpJͪz J dB|>(M"4:Ef"HDR!2M0F*B$"Ă1QTqh'RMe ՜H$C @$a1AR 4Y}ضR䋷vk6BehgہmK-Ў*_:=0uu=.y&?I@o魉Iq7;q[WN9V Ts`UvQA1#}Ġ>bG A11XQL`1#ݏn&Zv k;!G ٠k5°]NSN9A?uwV̟8۝gNp˫,0 AH6':V fF0Fi3Li2H3Jk$E Q惧(ae_8L|;đm8fH/Pr Egɬy Oqva+^Jxl`D膨S ̪B|b+,ڭK}#|0J(@AkPgT}u<R^رC[l!-Ȏ TBo(2ܴ!55ڄX0JMDR&S&F cd"Ӑ" ky}^!{*h6_k@/1± "PqBGef)k%aHE UaD( KF1ՒL3 ҡ2TƑRłsb1(2iJhDx%`2=PP$WB]ɲ_-*A0}l/ƽ<j$HR1!' ZhuxV.{V Z7/6m<* >zס׼ǭzNv>m\֫svxW.-{ >xWKE_crs7Mrt^D᪥0Y4Q6]ja =^+k+OsYd-퍥`x1fǤ/P_?*ըpw1Jf|wc ,d$Bj & j*F&()ha*z7g%tgv7<K(c{Q8K{^8Yƻ b$"ъw| X)w3߯TzhGߣ4jK1L^^U../;f>滔OT)/X,_WS#! D<R\j_*rS16I6Z k "1|;-Q"ђRhuBeTܠ8E<qv"J&I HzWQ"=똍B/=Jd . n!A0UVQUm$&R0ĨPy^j@~ݼ4r@ ~e[%m -T!?#sH>pxbk- q609af$ [rq-V[|wխ;WOxI~ chYZTEi0:WH("GuN^}hEV)pg^ XXkRO,BqIscdVX*7 i|C yG]b8*\R m1ՔO1jd1Q)wXVOU.O>EZ-<֔h+d8# n ISp.y&0R=XKl>pO &԰/]yoǒ*Ⱦ R}z%':改HmEwJR!9#a@GÙꪮ_T-wZPrnQjhu1QaN0ُ/:}% NjދOaejs!PCE>/ WKvK.EBfIjm @Mp6Ž媴I=~G8 j3s`^Ғ&sq0v@XհW1~}8.)~?&Bů5~<[f>cjވdm%BM5?Fڮ##4,U>bD@v]GC؈K4,ۏڔ F]9H}3^6&%UqcBD|r/\  5cR(5'lq V5DL0MZxx՚ȷQyysڭN/BҎoVQBBn-S5ݎwy†<|ꘜcBocxc#APS@f4/y TJ_LFhvkN YJ)Nj!7iSTae=i}jJ']=7 {OENk;sLK&QNUZJ1N)8U 1#Y84b0HžEmdrݏUS_Pa(ixfP)FX3EUBkMU.ig29 SD9SSd)9ӺV#HdsST,O53O4%bs(G!SLF/O jLU1b4d7An=H(A&)OH}ܛH냅`$w$`tm~ t)8o&Ez1/RT|4uړtKʬ~DRר+v5clS?W1rN**l?jP+luFU *OwGըU%% 8 y"ZInN}_)V.S>ڭF˴nS.$䉋h%=p FV.S>ڭN/aq8ۭzvۅc^ާ$FLf5MG鳣 t:Ꞝ`";.:)j1~~kZw؍~%ӻ_z0q&\{>HQ;ϪZ-[ݚƀ{vtK5a${5edG `mFEڎb:Iɒ{k<l2O 9:kyA"l+UsLu Rwn:ю!Z qZ?8XI><ҳJ\uڒRhox}GlY+.dְlUKfudS`r8uφ.!6 ЪDV+F= Z"0Jr&, )11JH;)e8Dw)mc4E%Hb CpFHu89עqۈc vµn# "K-LJBhsA>X0L39ꢖkJ2SL0x6lV"/D^0yQqoؖќ!(fAI&x”I44:yXZ3h0of6o-+@D)A ۚG r<؈#ӏj4V~tHTT2+a(Q3yt7o3&E+e,^WZ`cYAy:uTE\ZRV,q@>*7Ȋ²nfYSuH`8~>@`N{p<5m.,3qp25x3J0^{_w¿'xw4z&+̤{ۛYnv3P -n^{h|mnz Qnd/ܘ? 
7-;0(uпNdzn68,Zs~ã7bQtiM.g_F7`'Oݛ-f2`?B>?FJnDEn)"{vgX|WԾQŧߜURO~tx餋ѱlΦWqSSRq68]|-.w l1Be5&$y =NY17~`4t2] ],|uyst=9~k7|ea͋hO7SNA໿?zj(y>}2K ?{̾:2#)f6x>yhpXvi /I4+y?|Ik>p6A:&@?O#ɂ<ޜGcPk_NA+>{@SЮ,eS^F~%^N~؝lيj+>ߘX&/]i rTŤ-MKhz_;z !i%(]wk,և,VW`=[x}WR|l%p}]e1n0#BWgR1p,׳{-F!D(]C |E"qAAAAUI!lO"X9ƯL!$)' EFU a^$Nq=YGb(ubDaj`B$&yb#x%$YםpshWS󭨿aM 8[g}lXۿp,m 9r'dioK\(ov? 7o6x7kUkYUfwzYћBvkɫ-yi7jPC?#0%* .Wr??==gG3N^.Mm0?f6}|NJ}>Pag^ߞ /bg /۟eRa 7*jv"${0ߢ(KfX h8W%]<0`>3s7u>|tjf~kLcgƟz:_̞so[U]o'TʼnWGvQօL7VqtN{]ot`M0)UdIZPv5^؍C^x}=Xeu/v )ȞxV|9sG n7IA-"܂`LߏZsaUT@RwÃb Eq.AbV !h\,Ys*e$c_A2T?_j Ab,[xE*}gHjHQP#kؐ9Þs5LNV4%8ÏOI;ɮ:3f\[2*OH\MNl2>BZMHr"1ZrǴ: ʸ h墏is"5 QR`tNDA9.IY]Am\}@D܃jp/m^>3𘷮T4^+~D巟.:f#.)#Y]޼^^9*wΧCFpV݂oQw?;|RmA41eշb #"hP}I_iόU(Hy=58Z$  L,ؖ2yHRzHkjQqgqKx6 7!ZY>?Cw6=D󪢬M]GN{}# %w 2XTŴx G(%ŧbW =${`lo-hxK*]>`os&קGEۛ_Q.8NOnbrCkPvzD9&q[HZ/ å/T` Z0(6^==^m%SD} Q O\ܐfm4h|^ϛ/6bg|Lr9'ik6x=޼ISͻjޖ[ڡ:p#9uih.B P'1hC!(ioo!7&&U[/Xc ʹʁ<3,&IC,IfA l0X( 9PkJ19mRR$Պ;DZG',07e e#Bq)Z)"v{wE$w Rzkd7gԷ,bx;gvO lwmw_J퀧EșAX`x&hxuu"x۫y4u7nf4i2|G_ήNN'յ;~7Z4nϮ3ĀևfњNzrm՝rFxxi_ڰ wҷ'qkG v5CPMv(jKyW$'CwIUVhPd탄=/ {ˎ`@C0`D8}3WOvֈ(q / z}dy^Mpcv,2X"[}Ipy_>dT_#a ͸e^" "j]l|;>tQoGh 6/NgZ(w &lj}j*(r"8*$-LDDd^[[^qvu6023| jֹhT߼/P鐡ى;:14\4皡D~ug]ufAxȇt g뻜e Op<,G9gf ⬘I\]4Rit~)d^J /75+E ᆏ_Ȩgj7`ik)u*ҙ7f$SR '*WbJ3J҂tYmMm?~ҳg''=pPhe71!wzę2Buq uh͕hx9_($ 3ĨVbE0*BgUN5Yk[-K%A Wt..>R㏐cCa6t+џ08,=h[S{K'pX=Տev"h4 T٧3peQ\[1.x _Xd䱠=5@t k)F A+FkJ8Jbek @APk54K*u9SJt,hk a ]V̚Y9ƃ᪷rݱrаcm}t u;]ۙ PtbCKYy<<[ëI%VC$ L=XT2AUT7 |mƺE S2f_pW圣TKͭ14K$=Wњ rM D2j-S^HiV:uBS):[S@ߡ9;ўqqk/a[9[nIO紲a7Kg'``ZVLX -{ꃽ;G%|U-XhwYҝi;_Z9Ͽc/eۻo?Ͽ#O;=CcLHĩD¡RJNw <:|PY"F_Ƌ8Xץ/R`Z$UG0բPy@(jP P#ʅ+0&ZFgDb ENh"%e"Cx\.1gJNrɂ:'|',c Y8$HF2~\x'|s~%:5}xE6lPL5Rp&1R0@d4R~t ׹В#CߏO nF bP $=?%8Ǖ%"QTJIz 'gCc Q@&x?4F\M#FHʟиוhP0E>rqNc-eƐ_J_WރPt~gʫquU޼'{Ƽv5JЗ®T尋 k*7fĨR]ͼ (;.Oαz5BDq`QQċNRGG$oڗ R_-/]W/])(rպci@~@{-]q(aDq=re B(R>*ΰB3DzIJRxbGZ*=oxXa&VxiaV2vZp~c"."Y HLFQX$xaAX* ~)}҆)De._ $hB0(y*>UGEmdrd.soC)@sc}\A95ZZU$OA|M m{9d " /D 'OEbÍ VNVD";}Y ׸|8.:㡰{p3 
}׮;D6Nf?=wwQ|H1d{`~ׯեC;=-pvíGVL|<xSAYmOtp:,5\-76^Se У@(M&י ?8XtgO&q2 od\ -VjNrR՜ldQx1u}8<툾ZK@\ȳU7_媦UUE9EBK _o^j;WoIS8Y7D8gazU^v6`0cY#a枸W?8Q]ȉS,f}'9ޢ8YxL˭֟79ZSKW4%~h쉾2B0a@LѬPR!~\곋IoXt0B(b#Pe{m ^aBkl@C6`D4te W`ę>Q֫w|\_tD4|Y1Z}26>t-{M~I!h fGG722o֌B m3|{mC tXl/W;m^ШI^_+*câ' $:xKNjaCP(Ʌ+Z:Eyjcܛu^8!2&.3w>r\\(CN %I{4Ŭ3xHTr&KeT!2W7*8y?-lJK.j(T+%(v1)CJ.ao^//e~wb3*!e,/;=,ΧC@ηw?Gxz9/}'k|́oΏQ ;ppfge$= ; V{r$$91.@z5`[Ym!DZn/5JsŰM|۸%p 7=Pk<ߤg2C uѬЎ0PɄBI-yCeU$IkWm GxZU8{6f _ nT`1v<"d@g;<6T\NMFѝU 0/L!jwy^m<-Kwy}qjӒq}~ZQ~Hzo~~s@МrQ>sJuOmb{V=rMdpktYQv/eyLT'Li>вZu~.zpvs;T?m ` ѻh/__5پuޒJ@$^eߦW$( Tk ~IƳ ~9:9T!Aw{vwY ~;[mt;~5ctJFeCFCAt&,4[ޕq#EBy~$1&0Xf4aniuf#hu*d"&5~yj'Z +Ahտ 1G][1޾}]|[|!yq7#͠)㒴>< }]b/*-֝ `K[k~(q{U6R+D-f+*Z]F Ѝso~z%6!8 |;ۑZ'jw1P弗ͦz';L]tf~.y r@ KRe~ Gu olBQHn47O<b3TmnH>$AApkKBm^ loŞ*,䁛hMR; h>ǻw1Jnӎޭ y&dSw>nv"v tBڜEhqc-wBnٔA&*JXK/%5gXY^kεg<Ixc T)ܘShwr|? #5|jnh 8u6֒k>+oUkl *B[*٩J *%;j\uG-TJM%څ`bӶ],'nxR  Q%/G)Og7LـhEG|&Pdʔ:xM!K).* E%N:}U4 rhjqyO^W7h\'VIta+VCŞЮ>`?lK'Sd' !DJ'R9!g \j;#.Z8bifeF})N58C*I,J0ϤF䎳~hzT*̖w BpDAqDR4aV[Xij0W1a kgBr }_ *_ TEV'A4,>\bH `I*I@ O91:D4Jx!Mf gH5"r9_ C:?%{\G}ڲ#'zr%#RŮ,eRjwGwtEۏ_)K, D@6RD2 3 f,I1H``*cb8'དQٝd^@kE+rV !vtb@)JYqzq/>[&~g盫Vw {-,\YeRqB6s*KǪՂkF= мuU!+DڀS]+UO[$4܄0W]E=U6ο7MU +FKBV QmzeC['}ܚVFAVn Z5Y`T[ BcJT |K^K6= ) -2AxT,)2!J.Oz٤M Tj9a:)3u8wjђRV(z5*{raawW ҕBO1\{ASp^+9 TrOL@L=IwW.JD!\j+`pJ G0 5ThX1sUd8>tC2wlv_; ݪZTiBΰ"14c\}"ub&diNQTˈC\v`{ܒT$%[w/6u'bl˓2H+mFjeQ:$ʴF3 ~gX$ÓA7IvC1L;6 WTSMP!!sNaSMPfeU'\! 
Feb 23 00:06:39 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 23 00:06:40 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 00:06:40 crc restorecon[4688]:
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 
crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 00:06:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 00:06:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 00:06:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 
00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 23 00:06:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 00:06:41 crc restorecon[4688]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 00:06:41 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc 
restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 00:06:41 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 23 00:06:42 crc kubenswrapper[4953]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 00:06:42 crc kubenswrapper[4953]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 23 00:06:42 crc kubenswrapper[4953]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 00:06:42 crc kubenswrapper[4953]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 23 00:06:42 crc kubenswrapper[4953]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 23 00:06:42 crc kubenswrapper[4953]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.889152 4953 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900224 4953 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900261 4953 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900270 4953 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900279 4953 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900322 4953 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900331 4953 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900339 4953 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900349 4953 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900357 4953 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900366 
4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900374 4953 feature_gate.go:330] unrecognized feature gate: Example Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900382 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900389 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900397 4953 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900405 4953 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900413 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900421 4953 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900429 4953 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900440 4953 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900450 4953 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900460 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900468 4953 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900476 4953 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900484 4953 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900493 4953 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900509 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900516 4953 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900524 4953 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900532 4953 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900540 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900548 4953 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900555 4953 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900563 4953 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 
00:06:42.900571 4953 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900579 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900587 4953 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900594 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900602 4953 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900611 4953 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900619 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900626 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900634 4953 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900642 4953 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900653 4953 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900663 4953 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900671 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900678 4953 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900686 4953 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900694 4953 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900702 4953 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900710 4953 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900717 4953 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900724 4953 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900735 4953 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900745 4953 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900753 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900761 4953 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900769 4953 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900776 4953 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900785 4953 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900793 4953 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900800 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900812 4953 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900822 4953 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900831 4953 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900839 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900847 4953 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900857 4953 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900865 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900874 4953 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.900882 4953 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901064 4953 flags.go:64] FLAG: --address="0.0.0.0" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901089 4953 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901109 4953 flags.go:64] FLAG: --anonymous-auth="true" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901123 4953 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901138 4953 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901150 4953 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901162 4953 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 
00:06:42.901172 4953 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901181 4953 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901191 4953 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901201 4953 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901210 4953 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901220 4953 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901229 4953 flags.go:64] FLAG: --cgroup-root="" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901238 4953 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901247 4953 flags.go:64] FLAG: --client-ca-file="" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901256 4953 flags.go:64] FLAG: --cloud-config="" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901265 4953 flags.go:64] FLAG: --cloud-provider="" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901273 4953 flags.go:64] FLAG: --cluster-dns="[]" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901313 4953 flags.go:64] FLAG: --cluster-domain="" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901323 4953 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901333 4953 flags.go:64] FLAG: --config-dir="" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901342 4953 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901351 4953 flags.go:64] FLAG: --container-log-max-files="5" Feb 23 00:06:42 crc 
kubenswrapper[4953]: I0223 00:06:42.901362 4953 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901371 4953 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901380 4953 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901389 4953 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901400 4953 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901411 4953 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901426 4953 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901439 4953 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901451 4953 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901467 4953 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901479 4953 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901491 4953 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901504 4953 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901516 4953 flags.go:64] FLAG: --enable-server="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901529 4953 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901545 4953 flags.go:64] FLAG: --event-burst="100"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901558 4953 flags.go:64] FLAG: --event-qps="50"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901572 4953 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901585 4953 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901598 4953 flags.go:64] FLAG: --eviction-hard=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901614 4953 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901626 4953 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901638 4953 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901657 4953 flags.go:64] FLAG: --eviction-soft=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901672 4953 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901685 4953 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901699 4953 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901712 4953 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901726 4953 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901738 4953 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901750 4953 flags.go:64] FLAG: --feature-gates=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901766 4953 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901779 4953 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901793 4953 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901805 4953 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901819 4953 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901830 4953 flags.go:64] FLAG: --help="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901842 4953 flags.go:64] FLAG: --hostname-override=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901855 4953 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901904 4953 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901917 4953 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901928 4953 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901938 4953 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901948 4953 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901956 4953 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901965 4953 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901974 4953 flags.go:64] FLAG: --kube-api-burst="100"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901982 4953 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.901992 4953 flags.go:64] FLAG: --kube-api-qps="50"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902001 4953 flags.go:64] FLAG: --kube-reserved=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902010 4953 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902019 4953 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902028 4953 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902037 4953 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902046 4953 flags.go:64] FLAG: --lock-file=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902068 4953 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902078 4953 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902087 4953 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902107 4953 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902118 4953 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902127 4953 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902136 4953 flags.go:64] FLAG: --logging-format="text"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902145 4953 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902155 4953 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902164 4953 flags.go:64] FLAG: --manifest-url=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902173 4953 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902185 4953 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902194 4953 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902205 4953 flags.go:64] FLAG: --max-pods="110"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902214 4953 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902224 4953 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902233 4953 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902242 4953 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902251 4953 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902260 4953 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902269 4953 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902328 4953 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902348 4953 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902361 4953 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902376 4953 flags.go:64] FLAG: --pod-cidr=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902387 4953 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902403 4953 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902412 4953 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902421 4953 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902430 4953 flags.go:64] FLAG: --port="10250"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902441 4953 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902450 4953 flags.go:64] FLAG: --provider-id=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902459 4953 flags.go:64] FLAG: --qos-reserved=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902469 4953 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902478 4953 flags.go:64] FLAG: --register-node="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902487 4953 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902496 4953 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902526 4953 flags.go:64] FLAG: --registry-burst="10"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902535 4953 flags.go:64] FLAG: --registry-qps="5"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902545 4953 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902556 4953 flags.go:64] FLAG: --reserved-memory=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902567 4953 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902577 4953 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902585 4953 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902594 4953 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902603 4953 flags.go:64] FLAG: --runonce="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902612 4953 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902622 4953 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902631 4953 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902640 4953 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902650 4953 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902660 4953 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902669 4953 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902678 4953 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902687 4953 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902696 4953 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902705 4953 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902714 4953 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902723 4953 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902732 4953 flags.go:64] FLAG: --system-cgroups=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902741 4953 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902756 4953 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902765 4953 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902774 4953 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902786 4953 flags.go:64] FLAG: --tls-min-version=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902795 4953 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902804 4953 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902854 4953 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902865 4953 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902875 4953 flags.go:64] FLAG: --v="2"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902887 4953 flags.go:64] FLAG: --version="false"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902899 4953 flags.go:64] FLAG: --vmodule=""
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902910 4953 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.902920 4953 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903139 4953 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903150 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903163 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903172 4953 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903180 4953 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903188 4953 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903198 4953 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903206 4953 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903217 4953 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903226 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903234 4953 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903242 4953 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903250 4953 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903258 4953 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903265 4953 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903273 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903281 4953 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903323 4953 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903335 4953 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903345 4953 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903355 4953 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903364 4953 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903374 4953 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903383 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903392 4953 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903400 4953 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903408 4953 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903415 4953 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903423 4953 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903434 4953 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903444 4953 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903452 4953 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903460 4953 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903469 4953 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903479 4953 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903487 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903496 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903504 4953 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903522 4953 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903532 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903540 4953 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903548 4953 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903556 4953 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903563 4953 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903573 4953 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903580 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903596 4953 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903603 4953 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903611 4953 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903618 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903627 4953 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903634 4953 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903642 4953 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903649 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903657 4953 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903665 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903673 4953 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903681 4953 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903688 4953 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903696 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903704 4953 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903712 4953 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903720 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903728 4953 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903735 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903743 4953 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903751 4953 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903758 4953 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903766 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903774 4953 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.903785 4953 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.903809 4953 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.918849 4953 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.918893 4953 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919051 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919069 4953 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919081 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919095 4953 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919112 4953 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919124 4953 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919138 4953 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919152 4953 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919165 4953 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919175 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919186 4953 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919197 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919208 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919220 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919231 4953 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919241 4953 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919251 4953 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919261 4953 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919271 4953 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919280 4953 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919327 4953 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919338 4953 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919347 4953 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919358 4953 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919367 4953 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919377 4953 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919387 4953 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919396 4953 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919408 4953 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919421 4953 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919430 4953 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919440 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919450 4953 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919461 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919471 4953 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919481 4953 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919491 4953 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919501 4953 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919511 4953 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919524 4953 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919534 4953 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919544 4953 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919554 4953 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919565 4953 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919575 4953 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919585 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919595 4953 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919610 4953 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919623 4953 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919636 4953 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919647 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919658 4953 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919668 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919679 4953 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919690 4953 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919700 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919711 4953 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919722 4953 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919731 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919742 4953 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919752 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919762 4953 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919776 4953 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919789 4953 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919801 4953 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919813 4953 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919823 4953 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919833 4953 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919843 4953 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919853 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.919864 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.919880 4953 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920200 4953 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920219 4953 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920231 4953 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920245 4953 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920260 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920272 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920282 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920329 4953 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920340 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920352 4953 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920363 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920374 4953 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920384 4953 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920394 4953 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920404 4953 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920414 4953 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920427 4953 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920440 4953 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920451 4953 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920462 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920472 4953 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920481 4953 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920492 4953 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920502 4953 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920512 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920522 4953 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920532 4953 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920541 4953 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920552 4953 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920564 4953 feature_gate.go:330] unrecognized feature gate:
GatewayAPI Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920577 4953 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920590 4953 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920602 4953 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920613 4953 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920624 4953 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920637 4953 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920649 4953 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920660 4953 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920673 4953 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920685 4953 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920696 4953 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920706 4953 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920716 4953 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920726 4953 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 
00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920737 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920746 4953 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920756 4953 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920766 4953 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920776 4953 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920787 4953 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920798 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920808 4953 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920818 4953 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920831 4953 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920843 4953 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920854 4953 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920865 4953 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920875 4953 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920885 4953 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920895 4953 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920906 4953 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920916 4953 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920926 4953 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920936 4953 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920947 4953 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920959 4953 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920969 4953 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.920980 4953 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 00:06:42 crc 
kubenswrapper[4953]: W0223 00:06:42.920990 4953 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.921001 4953 feature_gate.go:330] unrecognized feature gate: Example Feb 23 00:06:42 crc kubenswrapper[4953]: W0223 00:06:42.921011 4953 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.921028 4953 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.921350 4953 server.go:940] "Client rotation is on, will bootstrap in background" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.937363 4953 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.937537 4953 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.939902 4953 server.go:997] "Starting client certificate rotation" Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.939948 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.942243 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 09:24:05.25077249 +0000 UTC Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.942408 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 00:06:42 crc kubenswrapper[4953]: I0223 00:06:42.993014 4953 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.004145 4953 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.005798 4953 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.038408 4953 log.go:25] "Validated CRI v1 runtime API" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.122615 4953 log.go:25] "Validated CRI v1 image API" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.126143 4953 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.132524 4953 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-23-00-02-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.132566 4953 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.161959 4953 manager.go:217] Machine: {Timestamp:2026-02-23 00:06:43.158250444 +0000 UTC m=+1.092092360 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:eabb6e58-7c3d-4135-8dce-11f0c13836c2 BootID:533c54a2-4b2a-486b-84ff-79539bb86284 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 
Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ad:02:8f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ad:02:8f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d6:da:e3 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2a:84:19 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8d:ea:c1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ea:02:ac Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:b1:6a:26:01:3b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:7e:15:01:1d:e0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.162358 4953 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.162524 4953 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.162912 4953 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.163238 4953 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.163373 4953 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":nu
ll,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.163692 4953 topology_manager.go:138] "Creating topology manager with none policy" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.163710 4953 container_manager_linux.go:303] "Creating device plugin manager" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.164457 4953 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.164506 4953 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.164744 4953 state_mem.go:36] "Initialized new in-memory state store" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.164869 4953 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.199733 4953 kubelet.go:418] "Attempting to sync node with API server" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.199775 4953 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.199800 4953 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.199823 4953 kubelet.go:324] "Adding apiserver pod source" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.199845 4953 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.205458 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.205535 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.205481 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.205603 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.206318 4953 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.208178 4953 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.212758 4953 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216061 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216104 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216119 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216134 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216157 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216170 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216184 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216205 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216221 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216235 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216253 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.216266 4953 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.220035 4953 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.220725 4953 server.go:1280] "Started kubelet" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.220925 4953 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.221599 4953 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 23 00:06:43 crc systemd[1]: Started Kubernetes Kubelet. Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.228238 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.228359 4953 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.235473 4953 server.go:460] "Adding debug handlers to kubelet server" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.235742 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.235790 4953 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.235852 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:16:35.629448663 +0000 UTC Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.236187 4953 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.236350 4953 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 
00:06:43.236375 4953 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.236467 4953 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.238552 4953 factory.go:55] Registering systemd factory Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.238581 4953 factory.go:221] Registration of the systemd container factory successfully Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.242276 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.242399 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.242879 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.243863 4953 factory.go:153] Registering CRI-O factory Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.243909 4953 factory.go:221] Registration of the crio container factory successfully Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.244022 4953 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.244068 4953 factory.go:103] Registering Raw factory Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.244100 4953 manager.go:1196] Started watching for new ooms in manager Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.245252 4953 manager.go:319] Starting recovery of all containers Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.247269 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896b771e5285c93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 00:06:43.220683923 +0000 UTC m=+1.154525809,LastTimestamp:2026-02-23 00:06:43.220683923 +0000 UTC m=+1.154525809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253088 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253165 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 
00:06:43.253188 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253209 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253228 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253248 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253268 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253313 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253336 4953 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253354 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253380 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253400 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253420 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253444 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253462 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253480 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253500 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253520 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253540 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253562 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253617 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253637 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253658 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253711 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253734 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253754 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253779 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253800 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253820 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253840 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.253981 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254004 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254022 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" 
seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254040 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254060 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254077 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254095 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254121 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254152 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254179 
4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254205 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254225 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254251 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254279 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254366 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254396 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254422 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254449 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254470 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254496 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254522 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254573 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254643 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254677 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254708 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254735 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254761 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254802 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254827 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254853 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254878 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254902 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254927 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254952 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.254975 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255000 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255028 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255051 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255074 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255099 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255125 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255150 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255175 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255201 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255228 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255256 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255281 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255350 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255376 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255402 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255426 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255452 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255480 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255507 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255531 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255557 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255584 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255607 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255631 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255656 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255680 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255703 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255725 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255750 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255776 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255799 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255824 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255849 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255876 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255899 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255923 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255949 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255975 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.255999 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256036 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256065 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256095 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256121 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256148 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256173 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256200 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256226 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" 
seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256251 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256278 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256345 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256372 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256397 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256421 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: 
I0223 00:06:43.256444 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256467 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256493 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256516 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256536 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256553 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256571 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256589 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256606 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256629 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256645 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256667 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256692 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256716 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256742 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256766 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256788 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256813 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256836 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256863 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256885 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256906 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256926 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256951 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.256976 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257001 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257025 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257049 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257072 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257095 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257118 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257141 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257168 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257193 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257219 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257244 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257270 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257328 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257356 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257381 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257403 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257427 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257450 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" 
seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257475 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257495 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257518 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257543 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257566 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257592 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 
00:06:43.257614 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257635 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257662 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257725 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.257753 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262419 4953 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 23 00:06:43 crc 
kubenswrapper[4953]: I0223 00:06:43.262477 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262516 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262544 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262572 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262599 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262628 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262661 4953 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262690 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262718 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262742 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262769 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262797 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262825 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262853 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262879 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262934 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262960 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.262990 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263017 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263043 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263070 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263097 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263123 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263150 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263175 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" 
seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263205 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263234 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263263 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263330 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263356 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263381 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263405 4953 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263429 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263453 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263475 4953 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263492 4953 reconstruct.go:97] "Volume reconstruction finished" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.263505 4953 reconciler.go:26] "Reconciler: start to sync state" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.278350 4953 manager.go:324] Recovery completed Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.287319 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.289043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.289112 4953 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.289132 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.290176 4953 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.290205 4953 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.290237 4953 state_mem.go:36] "Initialized new in-memory state store" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.315394 4953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.318807 4953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.324994 4953 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.325080 4953 kubelet.go:2335] "Starting kubelet main sync loop" Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.325154 4953 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.326971 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.327091 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.337431 4953 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.350525 4953 policy_none.go:49] "None policy: Start" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.351821 4953 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.351859 4953 state_mem.go:35] "Initializing new in-memory state store" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.412455 4953 manager.go:334] "Starting Device Plugin manager" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.412505 4953 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.412521 4953 server.go:79] "Starting device plugin registration server" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.413068 4953 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.413087 4953 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.413425 4953 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.413546 4953 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.413555 4953 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.420441 4953 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.426915 4953 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.427014 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.428672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.428703 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.428714 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.428835 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.429745 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.429814 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.429833 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.434226 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.434315 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.434335 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.434559 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.434645 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.435344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.435387 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.435400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.435580 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436004 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436042 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436379 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436411 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436435 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436451 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436654 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.436790 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: 
I0223 00:06:43.436992 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437035 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437353 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437388 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437393 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437412 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437434 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437607 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.437635 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.438079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.438111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.438123 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.438206 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.438225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.438236 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.443553 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465068 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465111 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465130 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465194 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465231 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465257 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465279 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465330 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465352 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465371 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465393 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc 
kubenswrapper[4953]: I0223 00:06:43.465415 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465450 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465479 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.465499 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.513763 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.515074 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.515111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 
00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.515124 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.515152 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.518369 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566570 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566675 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566728 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566770 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566814 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566850 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566884 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566922 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566916 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566958 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566972 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.566994 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567045 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567078 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567096 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 
00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567118 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567135 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567171 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567183 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567206 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567215 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567246 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567266 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567279 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567355 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567346 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567395 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567171 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567248 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.567548 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.568372 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.656715 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7c5fdd27f2fb8ceae07855b89bbe94e50ea2bd9faf3261558675b04533c69eea WatchSource:0}: Error finding container 7c5fdd27f2fb8ceae07855b89bbe94e50ea2bd9faf3261558675b04533c69eea: Status 404 returned error can't find the container with id 7c5fdd27f2fb8ceae07855b89bbe94e50ea2bd9faf3261558675b04533c69eea
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.718914 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.721620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.721665 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.721677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.721708 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.722149 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.784405 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.801836 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7cfff1bfd10123b61e55d29ed6cfcdf742830a21e8f6ed7598bf849b7b625157 WatchSource:0}: Error finding container 7cfff1bfd10123b61e55d29ed6cfcdf742830a21e8f6ed7598bf849b7b625157: Status 404 returned error can't find the container with id 7cfff1bfd10123b61e55d29ed6cfcdf742830a21e8f6ed7598bf849b7b625157
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.806646 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.823507 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.833236 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-86d8fd9cec43db36231b08f4c779d873426ac4ecfd07531cb3d90a688a9c5cbe WatchSource:0}: Error finding container 86d8fd9cec43db36231b08f4c779d873426ac4ecfd07531cb3d90a688a9c5cbe: Status 404 returned error can't find the container with id 86d8fd9cec43db36231b08f4c779d873426ac4ecfd07531cb3d90a688a9c5cbe
Feb 23 00:06:43 crc kubenswrapper[4953]: E0223 00:06:43.844105 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms"
Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.844126 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f36b50d233e3d8693c9f012ada3625a5212fd8041b5e54bb9459ed4bff40d276 WatchSource:0}: Error finding container f36b50d233e3d8693c9f012ada3625a5212fd8041b5e54bb9459ed4bff40d276: Status 404 returned error can't find the container with id f36b50d233e3d8693c9f012ada3625a5212fd8041b5e54bb9459ed4bff40d276
Feb 23 00:06:43 crc kubenswrapper[4953]: I0223 00:06:43.854073 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 00:06:43 crc kubenswrapper[4953]: W0223 00:06:43.868914 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3d1ccc1a487809f716c97bf144d2d7561d67e1d21af8280fd83319c08d634bff WatchSource:0}: Error finding container 3d1ccc1a487809f716c97bf144d2d7561d67e1d21af8280fd83319c08d634bff: Status 404 returned error can't find the container with id 3d1ccc1a487809f716c97bf144d2d7561d67e1d21af8280fd83319c08d634bff
Feb 23 00:06:44 crc kubenswrapper[4953]: W0223 00:06:44.080393 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:44 crc kubenswrapper[4953]: E0223 00:06:44.080470 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:44 crc kubenswrapper[4953]: W0223 00:06:44.098360 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:44 crc kubenswrapper[4953]: E0223 00:06:44.098461 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.122841 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.123847 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.123895 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.123907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.123935 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 00:06:44 crc kubenswrapper[4953]: E0223 00:06:44.124205 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Feb 23 00:06:44 crc kubenswrapper[4953]: W0223 00:06:44.186856 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:44 crc kubenswrapper[4953]: E0223 00:06:44.186956 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.229318 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.236374 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:17:35.92882439 +0000 UTC
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.332481 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f36b50d233e3d8693c9f012ada3625a5212fd8041b5e54bb9459ed4bff40d276"}
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.333780 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"86d8fd9cec43db36231b08f4c779d873426ac4ecfd07531cb3d90a688a9c5cbe"}
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.334574 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7cfff1bfd10123b61e55d29ed6cfcdf742830a21e8f6ed7598bf849b7b625157"}
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.335429 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7c5fdd27f2fb8ceae07855b89bbe94e50ea2bd9faf3261558675b04533c69eea"}
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.336138 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d1ccc1a487809f716c97bf144d2d7561d67e1d21af8280fd83319c08d634bff"}
Feb 23 00:06:44 crc kubenswrapper[4953]: W0223 00:06:44.422148 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:44 crc kubenswrapper[4953]: E0223 00:06:44.422252 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:44 crc kubenswrapper[4953]: E0223 00:06:44.645885 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.924938 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.926376 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.926437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.926455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:44 crc kubenswrapper[4953]: I0223 00:06:44.926493 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 00:06:44 crc kubenswrapper[4953]: E0223 00:06:44.926991 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.029142 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 00:06:45 crc kubenswrapper[4953]: E0223 00:06:45.030489 4953 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.229227 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.237875 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 07:18:26.460639553 +0000 UTC
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.342449 4953 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43" exitCode=0
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.342555 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.342558 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43"}
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.343831 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.343881 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.343900 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.346043 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba"}
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.346078 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.347031 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.347079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.347097 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.348215 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929"}
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.350973 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5"}
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.351034 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.352092 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.352164 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.352210 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.352994 4953 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="68c5b13a3edb243a70eb006ed2f28ac23bf2409bd0dfa263c98f5b2b6d3cb2d4" exitCode=0
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.353047 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"68c5b13a3edb243a70eb006ed2f28ac23bf2409bd0dfa263c98f5b2b6d3cb2d4"}
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.353143 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.353901 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.354258 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.354310 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.354325 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.355143 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.355175 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:45 crc kubenswrapper[4953]: I0223 00:06:45.355186 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:46 crc kubenswrapper[4953]: W0223 00:06:46.124373 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:46 crc kubenswrapper[4953]: E0223 00:06:46.124771 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.229569 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.238088 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:43:27.966925744 +0000 UTC
Feb 23 00:06:46 crc kubenswrapper[4953]: E0223 00:06:46.246828 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s"
Feb 23 00:06:46 crc kubenswrapper[4953]: W0223 00:06:46.284111 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:46 crc kubenswrapper[4953]: E0223 00:06:46.284165 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.359188 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.359245 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.359260 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.362620 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5" exitCode=0
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.362686 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.362707 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.362723 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.362735 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.365453 4953 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2d7e9132b510df780125450753030cf5bd92d93407e40be89c8c2d906ab97fa0" exitCode=0
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.365508 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2d7e9132b510df780125450753030cf5bd92d93407e40be89c8c2d906ab97fa0"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.365680 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.367641 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.367668 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.367680 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.367644 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.367722 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.369183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.369246 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.369264 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.369659 4953 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba" exitCode=0
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.369768 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba"}
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.370027 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.371530 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.371558 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.371569 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.527233 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.528839 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.528900 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.528915 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:46 crc kubenswrapper[4953]: I0223 00:06:46.528957 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 00:06:46 crc kubenswrapper[4953]: E0223 00:06:46.529628 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Feb 23 00:06:46 crc kubenswrapper[4953]: E0223 00:06:46.889489 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896b771e5285c93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 00:06:43.220683923 +0000 UTC m=+1.154525809,LastTimestamp:2026-02-23 00:06:43.220683923 +0000 UTC m=+1.154525809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 00:06:47 crc kubenswrapper[4953]: W0223 00:06:47.197110 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:47 crc kubenswrapper[4953]: E0223 00:06:47.197202 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:47 crc kubenswrapper[4953]: W0223 00:06:47.203402 4953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:47 crc kubenswrapper[4953]: E0223 00:06:47.203457 4953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.229125 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.238734 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:11:10.215887369 +0000 UTC
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.375155 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e"}
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.375256 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58"}
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.375280 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae"}
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.375191 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.376185 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.376228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.376239 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.378962 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67"}
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.379013 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1"}
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.379176 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.380444 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.380465 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.380476 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.380980 4953 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="842c0841144a302d6c81d8e2eafdd82ac681c99dc38f8e6cbfaa0ca285a3130b" exitCode=0
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.381060 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"842c0841144a302d6c81d8e2eafdd82ac681c99dc38f8e6cbfaa0ca285a3130b"}
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.381152 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.381220 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.381155 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382116 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382161 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382602 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382634 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382646 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382698 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382712 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.382721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.788448 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 00:06:47 crc kubenswrapper[4953]: I0223 00:06:47.829109 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.214433 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.214803 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.214905 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused"
Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.229123 4953 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.239449 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:28:42.427053791 +0000 UTC
Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.264781 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.390275 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc"
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4901ddb43d47d0aaf6710a1e5851c5cc37fb5fabc8a7fee9572d8caac0290c25"} Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.390343 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b1f42f64ba93a8c7b966b0782ae4efadbb27d201398db0b793f905fe36fabaa"} Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.390359 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1f9406427678d922070fe6a4de93066e7e32b397bf4a5a0b9b0c8ebe69366318"} Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.390360 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.390406 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.390434 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.390481 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395648 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395665 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395657 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395695 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395719 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395684 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.395803 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:48 crc kubenswrapper[4953]: I0223 00:06:48.770730 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.210133 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.240503 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:31:04.120742161 +0000 UTC Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.395418 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.397177 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67" exitCode=255 Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.397291 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67"} Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.397397 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.398555 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.398612 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.398630 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.399403 4953 scope.go:117] "RemoveContainer" containerID="26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.402022 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8b4adae5f543d5649baf638c46612796731d88b7bb40df970b6b69695e7058c2"} Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.402077 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aa765fff794946e4c1f654ecd29a1138b5144718db12ffdafe32ceb8515b8bdc"} Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.402043 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.402133 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.402144 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403018 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403054 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403423 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403483 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403507 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403814 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403837 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.403845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.730314 4953 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.731896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.732000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.732026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:49 crc kubenswrapper[4953]: I0223 00:06:49.732090 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.240992 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:16:45.675504653 +0000 UTC Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.260259 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.406420 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.408127 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b"} Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.408219 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.408321 4953 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.408385 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.408977 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.409336 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.409377 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.409395 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.409455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.409494 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.409512 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.410507 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.410561 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.410578 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:50 crc kubenswrapper[4953]: I0223 00:06:50.644791 4953 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.241342 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:44:32.505956609 +0000 UTC Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.412038 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.412004 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.412411 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.413810 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.413882 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.413906 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.416252 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.416353 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.416426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:51 crc kubenswrapper[4953]: I0223 00:06:51.716005 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.007647 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.241451 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:32:54.936321634 +0000 UTC Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.416671 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.417222 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.422042 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.422100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.422113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.422622 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.422651 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.422665 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.925182 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.925396 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.926409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.926447 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:52 crc kubenswrapper[4953]: I0223 00:06:52.926458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.242356 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:27:30.807574946 +0000 UTC Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.417648 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.418985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.419108 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.419121 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:53 crc kubenswrapper[4953]: E0223 00:06:53.420684 4953 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.949944 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.950084 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.951083 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.951120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:53 crc kubenswrapper[4953]: I0223 00:06:53.951131 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:54 crc kubenswrapper[4953]: I0223 00:06:54.242881 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:06:47.146430397 +0000 UTC Feb 23 00:06:55 crc kubenswrapper[4953]: I0223 00:06:55.243312 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:34:05.31623624 +0000 UTC Feb 23 00:06:55 crc kubenswrapper[4953]: I0223 00:06:55.925534 4953 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 00:06:55 crc kubenswrapper[4953]: I0223 00:06:55.925610 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 00:06:56 crc kubenswrapper[4953]: I0223 00:06:56.244400 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:39:50.81933801 +0000 UTC Feb 23 00:06:57 crc kubenswrapper[4953]: I0223 00:06:57.244868 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:12:24.878432395 +0000 UTC Feb 23 00:06:58 crc kubenswrapper[4953]: I0223 00:06:58.244979 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:42:54.166994636 +0000 UTC Feb 23 00:06:58 crc kubenswrapper[4953]: I0223 00:06:58.777146 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:06:58 crc kubenswrapper[4953]: I0223 00:06:58.777259 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:06:58 crc kubenswrapper[4953]: I0223 00:06:58.778803 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:06:58 crc kubenswrapper[4953]: I0223 00:06:58.778838 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:06:58 crc kubenswrapper[4953]: I0223 00:06:58.778846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:06:59 crc kubenswrapper[4953]: E0223 00:06:59.211515 4953 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 23 00:06:59 crc kubenswrapper[4953]: I0223 00:06:59.236161 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 23 00:06:59 crc kubenswrapper[4953]: I0223 00:06:59.236232 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 00:06:59 crc kubenswrapper[4953]: I0223 00:06:59.240411 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 23 00:06:59 crc kubenswrapper[4953]: I0223 00:06:59.240443 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 00:06:59 crc kubenswrapper[4953]: I0223 00:06:59.245552 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:42:39.783973248 +0000 UTC Feb 23 00:07:00 crc kubenswrapper[4953]: I0223 
00:07:00.245862 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:19:32.086785347 +0000 UTC Feb 23 00:07:00 crc kubenswrapper[4953]: I0223 00:07:00.703818 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 23 00:07:00 crc kubenswrapper[4953]: I0223 00:07:00.703954 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:07:00 crc kubenswrapper[4953]: I0223 00:07:00.705078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:00 crc kubenswrapper[4953]: I0223 00:07:00.705121 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:00 crc kubenswrapper[4953]: I0223 00:07:00.705130 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:00 crc kubenswrapper[4953]: I0223 00:07:00.722708 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 23 00:07:01 crc kubenswrapper[4953]: I0223 00:07:01.246826 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:16:50.327549865 +0000 UTC Feb 23 00:07:01 crc kubenswrapper[4953]: I0223 00:07:01.437107 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:07:01 crc kubenswrapper[4953]: I0223 00:07:01.438818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:01 crc kubenswrapper[4953]: I0223 00:07:01.438888 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:01 crc 
kubenswrapper[4953]: I0223 00:07:01.438910 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:01 crc kubenswrapper[4953]: I0223 00:07:01.717445 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 23 00:07:01 crc kubenswrapper[4953]: I0223 00:07:01.717862 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 23 00:07:02 crc kubenswrapper[4953]: I0223 00:07:02.247377 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:07:57.316365822 +0000 UTC Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.218886 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.219133 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.219481 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.219551 4953 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.220609 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.220629 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.220637 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.222950 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.247556 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:58:02.179354393 +0000 UTC Feb 23 00:07:03 crc kubenswrapper[4953]: E0223 00:07:03.421940 4953 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.442574 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.443268 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 
00:07:03.443370 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.444276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.444384 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.444405 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.637473 4953 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 23 00:07:03 crc kubenswrapper[4953]: I0223 00:07:03.637553 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 23 00:07:04 crc kubenswrapper[4953]: E0223 00:07:04.228391 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.230420 4953 trace.go:236] 
Trace[2124950883]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Feb-2026 00:06:50.470) (total time: 13759ms): Feb 23 00:07:04 crc kubenswrapper[4953]: Trace[2124950883]: ---"Objects listed" error: 13759ms (00:07:04.230) Feb 23 00:07:04 crc kubenswrapper[4953]: Trace[2124950883]: [13.759810962s] [13.759810962s] END Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.230447 4953 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.230880 4953 trace.go:236] Trace[834813248]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Feb-2026 00:06:49.554) (total time: 14676ms): Feb 23 00:07:04 crc kubenswrapper[4953]: Trace[834813248]: ---"Objects listed" error: 14676ms (00:07:04.230) Feb 23 00:07:04 crc kubenswrapper[4953]: Trace[834813248]: [14.676505703s] [14.676505703s] END Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.230901 4953 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 00:07:04 crc kubenswrapper[4953]: E0223 00:07:04.232402 4953 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.232565 4953 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.232719 4953 trace.go:236] Trace[2128244029]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Feb-2026 00:06:52.487) (total time: 11744ms): Feb 23 00:07:04 crc kubenswrapper[4953]: Trace[2128244029]: ---"Objects listed" error: 11744ms (00:07:04.232) Feb 23 00:07:04 crc kubenswrapper[4953]: Trace[2128244029]: [11.744650158s] [11.744650158s] END Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.232858 4953 
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.237768 4953 trace.go:236] Trace[794624023]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Feb-2026 00:06:52.241) (total time: 11996ms): Feb 23 00:07:04 crc kubenswrapper[4953]: Trace[794624023]: ---"Objects listed" error: 11996ms (00:07:04.237) Feb 23 00:07:04 crc kubenswrapper[4953]: Trace[794624023]: [11.996487442s] [11.996487442s] END Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.237833 4953 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 00:07:04 crc kubenswrapper[4953]: I0223 00:07:04.248099 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:06:45.752690493 +0000 UTC Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.147159 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.157439 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.213998 4953 apiserver.go:52] "Watching apiserver" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.216149 4953 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.218025 4953 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-image-registry/node-ca-4jfxl","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.219142 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.219563 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.219600 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.219636 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.219681 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.219705 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.219761 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.219859 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.219944 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.219972 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.220465 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.222060 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.222146 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.226708 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.227054 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.227086 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.227197 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.227197 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.227548 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.229581 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 
23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.231026 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.232161 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.232211 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.237151 4953 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.248197 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:35:14.525442563 +0000 UTC Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.277359 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.300925 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.317919 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.336844 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.336881 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.336899 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 
00:07:05.336914 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.336932 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.336950 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.336966 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.336990 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337006 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") 
pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337021 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337034 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337050 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337066 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337081 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337099 4953 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337138 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337153 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337171 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337187 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337202 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337217 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337234 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337267 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337302 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337326 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337343 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" 
(UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337360 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337379 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337410 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337431 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337447 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337462 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337484 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337501 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337518 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337536 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337555 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337586 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337601 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337615 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337630 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337645 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337662 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337682 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337702 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337728 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337759 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337774 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337791 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337805 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337819 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337834 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337848 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337862 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337877 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337892 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337908 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337923 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337951 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337968 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.337983 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338001 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338017 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338034 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338051 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338068 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338084 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338100 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338122 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 
00:07:05.338144 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338165 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338187 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338207 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338230 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338253 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338271 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338304 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338321 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338372 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338396 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338420 4953 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338445 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338464 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338484 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338500 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338516 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338541 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338555 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338570 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338584 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338604 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338626 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" 
(UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338649 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338668 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338683 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338705 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338728 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 00:07:05 
crc kubenswrapper[4953]: I0223 00:07:05.338748 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338769 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338790 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338807 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338822 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338836 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338849 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338865 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338885 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338907 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338931 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") 
" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338954 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338976 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.338998 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339021 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339046 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339069 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339091 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339115 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339140 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339163 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339183 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 00:07:05 crc 
kubenswrapper[4953]: I0223 00:07:05.339203 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339218 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339239 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339255 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339270 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339305 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339322 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339338 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339355 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339373 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339388 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 
00:07:05.339403 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339501 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339520 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339536 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339552 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339568 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339588 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339604 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339619 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339635 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339650 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: 
I0223 00:07:05.339665 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339680 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339695 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339712 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339728 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339743 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339758 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339775 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339791 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339809 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339826 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339844 4953 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339861 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339879 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339897 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339920 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339943 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339963 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339979 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.339996 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340027 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340055 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " 
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340073 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340093 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340109 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340126 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340147 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340168 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340184 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340202 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340218 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340234 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340250 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340266 
4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340301 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340327 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340350 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340370 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340391 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" 
(UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340414 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340439 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340666 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340692 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340717 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340743 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340763 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340798 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340825 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340850 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.340875 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 00:07:05 crc 
kubenswrapper[4953]: I0223 00:07:05.340900 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341255 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341319 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341342 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341490 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341529 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/920835de-d258-45c0-beaa-c478dddb38e9-host\") pod \"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341560 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341584 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341607 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341658 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341677 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/920835de-d258-45c0-beaa-c478dddb38e9-serviceca\") pod \"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341696 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341717 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341737 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341803 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 
00:07:05.341824 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341845 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341869 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341890 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341951 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.341969 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wx2\" (UniqueName: \"kubernetes.io/projected/920835de-d258-45c0-beaa-c478dddb38e9-kube-api-access-f7wx2\") pod \"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.346918 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347009 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347040 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347357 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347470 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347486 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347605 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347632 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347644 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347483 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347698 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347769 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347807 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.347900 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348127 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348180 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348306 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348371 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348450 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348585 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348642 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348764 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348787 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348844 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348902 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.348984 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349093 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349182 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349197 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349260 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349514 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349533 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349526 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349749 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.349858 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.351159 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.351341 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.351462 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.351927 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.352048 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.352354 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.352353 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.352592 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.352795 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.352948 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.352977 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.353181 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.353184 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.353718 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.353944 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.355368 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.357031 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.357622 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.358157 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.358327 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.358512 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.358705 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.358888 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.359077 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.359229 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.359366 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.359388 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.359576 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.359508 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.360493 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.360556 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.360647 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:05.860617494 +0000 UTC m=+23.794459330 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.360894 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.360972 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.361226 4953 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.357802 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.361484 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.361973 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.362465 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.362820 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.363308 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.363328 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.363406 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.363597 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.363915 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.364306 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.364315 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.364448 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.367133 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.367218 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.367655 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.367745 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.367965 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.368323 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.368460 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:05.868435462 +0000 UTC m=+23.802277308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.368788 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.368840 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:07:05.868828271 +0000 UTC m=+23.802670117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.368875 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.368992 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.369106 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.369715 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.369839 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.369808 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.370095 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.370168 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.370540 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.370550 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.370597 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.370825 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.370847 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.370854 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.371357 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.371483 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.371550 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.371695 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.371886 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.371924 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.372053 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.377825 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.379103 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.379207 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.378986 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.379527 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.380740 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381169 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381236 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381265 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381261 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381489 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381553 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381556 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381563 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381663 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.381834 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.382210 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.382260 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.382466 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.382549 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.382729 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.382851 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.382889 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.383172 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.383198 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.383219 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.383263 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.386069 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.386632 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.388633 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.389645 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.390073 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.390299 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.391833 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.392078 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.392326 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.392736 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.392733 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.393212 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.393333 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.393341 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.393414 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.393608 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.394127 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.394368 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.394395 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.394413 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.394494 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.394512 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:05.894486727 +0000 UTC m=+23.828328763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.394561 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.394580 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.394591 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.394626 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:05.89461733 +0000 UTC m=+23.828459406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.394761 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.394762 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.394840 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.394991 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.395115 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.400539 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.404503 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.406084 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.407610 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.407751 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.407841 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.408434 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.409014 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.409420 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.409456 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.409498 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.409766 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.409838 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.409992 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.410455 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.411534 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.411621 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.411660 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.412048 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.412612 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.412656 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.412739 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.412989 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.413077 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.413175 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.413156 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.413241 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.414516 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.415334 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.415487 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.417263 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.417925 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.417987 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.419532 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.419620 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.419707 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.419767 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.419905 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.419977 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.420033 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.420140 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.420471 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.421326 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.419976 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.422158 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.432710 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.442623 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.443712 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.443926 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.443985 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/920835de-d258-45c0-beaa-c478dddb38e9-serviceca\") pod \"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444030 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444075 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wx2\" (UniqueName: \"kubernetes.io/projected/920835de-d258-45c0-beaa-c478dddb38e9-kube-api-access-f7wx2\") pod \"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc 
kubenswrapper[4953]: I0223 00:07:05.444099 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/920835de-d258-45c0-beaa-c478dddb38e9-host\") pod \"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444160 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444178 4953 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444192 4953 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444215 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444227 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444241 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc 
kubenswrapper[4953]: I0223 00:07:05.444253 4953 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444265 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444277 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444305 4953 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444318 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444330 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444341 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444352 4953 reconciler_common.go:293] 
"Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444363 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444375 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444388 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444399 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444410 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444421 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444432 4953 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444444 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444455 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444466 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444478 4953 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444489 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444501 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444513 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" 
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444525 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444536 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444547 4953 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444558 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444568 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444579 4953 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444590 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444601 4953 reconciler_common.go:293] 
"Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444612 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444622 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444632 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444643 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444653 4953 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444667 4953 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444678 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") 
on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444691 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444704 4953 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444730 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444742 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444753 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444763 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444774 4953 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 
00:07:05.444786 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444798 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444809 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444820 4953 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444831 4953 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444842 4953 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444868 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444879 4953 reconciler_common.go:293] "Volume detached for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444890 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444901 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444912 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444923 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444934 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444944 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444956 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444968 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444980 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.444991 4953 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445002 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445011 4953 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445019 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445030 4953 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 
00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445041 4953 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445052 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445064 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445075 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445086 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445097 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445107 4953 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445117 4953 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445130 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445141 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445151 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445161 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445171 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445182 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445193 4953 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445203 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445213 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445227 4953 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445238 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445222 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445317 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/920835de-d258-45c0-beaa-c478dddb38e9-host\") pod \"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " 
pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445251 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445388 4953 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445398 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445410 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: W0223 00:07:05.445410 4953 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445420 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445429 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 
00:07:05.445440 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445451 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445461 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445470 4953 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445480 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445488 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445498 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445507 4953 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445515 4953 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445525 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445536 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445546 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445555 4953 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445564 4953 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445573 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445581 4953 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445590 4953 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445598 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445607 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445615 4953 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445624 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445633 4953 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc 
kubenswrapper[4953]: I0223 00:07:05.445641 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445650 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445659 4953 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445667 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445678 4953 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445686 4953 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445695 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445703 4953 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445720 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445729 4953 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445738 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445746 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445755 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445763 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445771 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445780 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445789 4953 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445799 4953 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445808 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445817 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445828 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445840 4953 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445849 4953 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445858 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445866 4953 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445875 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445884 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445893 4953 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445903 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445911 4953 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445919 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445927 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445937 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445945 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445954 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445963 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445972 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445981 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445990 4953 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446000 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446009 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446018 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446026 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446035 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446043 4953 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446051 4953 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446060 4953 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446069 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446078 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446086 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446094 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446102 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446110 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446121 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446130 4953 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446138 4953 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446148 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446157 4953 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446165 4953 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446172 4953 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446182 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446190 4953 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446198 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446206 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446215 4953 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" 
Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446223 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446232 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446242 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446251 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446260 4953 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446269 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446277 4953 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445469 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.445427 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446387 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.446498 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/920835de-d258-45c0-beaa-c478dddb38e9-serviceca\") pod \"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.460232 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.462722 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.463545 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.464061 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.468342 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b" exitCode=255 Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.468422 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b"} Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.468500 4953 scope.go:117] "RemoveContainer" containerID="26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.468991 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wx2\" (UniqueName: \"kubernetes.io/projected/920835de-d258-45c0-beaa-c478dddb38e9-kube-api-access-f7wx2\") pod 
\"node-ca-4jfxl\" (UID: \"920835de-d258-45c0-beaa-c478dddb38e9\") " pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.470056 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.476597 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.483266 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.494999 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.504647 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.515333 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.526505 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.535612 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.535810 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4jfxl" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.542595 4953 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.548707 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.549028 4953 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.549062 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.549072 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.554222 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.554318 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.559975 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 00:07:05 crc kubenswrapper[4953]: W0223 00:07:05.568659 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-60e67d18fbc8a4b244728f5ad8b02a6fd045f6a58b4df9b1696be8f458452dba WatchSource:0}: Error finding container 60e67d18fbc8a4b244728f5ad8b02a6fd045f6a58b4df9b1696be8f458452dba: Status 404 returned error can't find the container with id 60e67d18fbc8a4b244728f5ad8b02a6fd045f6a58b4df9b1696be8f458452dba Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.573569 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: W0223 00:07:05.582969 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-bb213dc4bc5c6aac4f2f03150b05cdcb038aaef661f727032d18d90679b7d3da WatchSource:0}: Error finding container bb213dc4bc5c6aac4f2f03150b05cdcb038aaef661f727032d18d90679b7d3da: Status 404 returned error can't find the container with id 
bb213dc4bc5c6aac4f2f03150b05cdcb038aaef661f727032d18d90679b7d3da Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.646462 4953 scope.go:117] "RemoveContainer" containerID="29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.646714 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.648112 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.648489 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-sqwrp"] Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.648852 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-sqwrp" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.652264 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.652523 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.653383 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.663160 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.674504 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.682452 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.691172 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.703029 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.716773 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:06:48Z\\\",\\\"message\\\":\\\"W0223 00:06:47.472657 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 00:06:47.474677 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771805207 cert, and key in /tmp/serving-cert-599428886/serving-signer.crt, 
/tmp/serving-cert-599428886/serving-signer.key\\\\nI0223 00:06:48.134431 1 observer_polling.go:159] Starting file observer\\\\nW0223 00:06:48.137125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0223 00:06:48.137301 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:06:48.138031 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-599428886/tls.crt::/tmp/serving-cert-599428886/tls.key\\\\\\\"\\\\nF0223 00:06:48.518995 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.727267 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.737576 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.749910 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.750309 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cae958ea-e590-41fb-a965-b4d17d18002c-hosts-file\") pod \"node-resolver-sqwrp\" (UID: \"cae958ea-e590-41fb-a965-b4d17d18002c\") " pod="openshift-dns/node-resolver-sqwrp" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.750335 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5x59\" (UniqueName: \"kubernetes.io/projected/cae958ea-e590-41fb-a965-b4d17d18002c-kube-api-access-k5x59\") pod \"node-resolver-sqwrp\" (UID: \"cae958ea-e590-41fb-a965-b4d17d18002c\") " pod="openshift-dns/node-resolver-sqwrp" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.759666 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.851512 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cae958ea-e590-41fb-a965-b4d17d18002c-hosts-file\") pod \"node-resolver-sqwrp\" (UID: \"cae958ea-e590-41fb-a965-b4d17d18002c\") " pod="openshift-dns/node-resolver-sqwrp" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.851559 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5x59\" (UniqueName: \"kubernetes.io/projected/cae958ea-e590-41fb-a965-b4d17d18002c-kube-api-access-k5x59\") pod \"node-resolver-sqwrp\" (UID: 
\"cae958ea-e590-41fb-a965-b4d17d18002c\") " pod="openshift-dns/node-resolver-sqwrp" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.851687 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cae958ea-e590-41fb-a965-b4d17d18002c-hosts-file\") pod \"node-resolver-sqwrp\" (UID: \"cae958ea-e590-41fb-a965-b4d17d18002c\") " pod="openshift-dns/node-resolver-sqwrp" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.874371 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5x59\" (UniqueName: \"kubernetes.io/projected/cae958ea-e590-41fb-a965-b4d17d18002c-kube-api-access-k5x59\") pod \"node-resolver-sqwrp\" (UID: \"cae958ea-e590-41fb-a965-b4d17d18002c\") " pod="openshift-dns/node-resolver-sqwrp" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.952333 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.952414 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.952438 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.952459 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.952481 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952586 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952603 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952614 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952641 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952665 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952674 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952690 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952586 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:07:06.952559031 +0000 UTC m=+24.886400877 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952721 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:06.952713805 +0000 UTC m=+24.886555651 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952732 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:06.952726875 +0000 UTC m=+24.886568721 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952756 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952768 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:06.952737185 +0000 UTC m=+24.886579031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:05 crc kubenswrapper[4953]: E0223 00:07:05.952795 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:06.952779516 +0000 UTC m=+24.886621362 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:05 crc kubenswrapper[4953]: I0223 00:07:05.964677 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-sqwrp" Feb 23 00:07:05 crc kubenswrapper[4953]: W0223 00:07:05.977306 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcae958ea_e590_41fb_a965_b4d17d18002c.slice/crio-4b360476921cbc365951b7be40e2321f234c6cb2b711fd2891cce61b5d3eca66 WatchSource:0}: Error finding container 4b360476921cbc365951b7be40e2321f234c6cb2b711fd2891cce61b5d3eca66: Status 404 returned error can't find the container with id 4b360476921cbc365951b7be40e2321f234c6cb2b711fd2891cce61b5d3eca66 Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.024051 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-pxzfb"] Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.024342 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.024991 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gpl86"] Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.025207 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.025479 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dw5dv"] Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.025971 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.026220 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.027316 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.027361 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.027851 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.028323 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.028357 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.028405 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.028365 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.028539 4953 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.029025 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.030609 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.031509 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.045899 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.077031 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.093001 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.112771 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.143429 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155076 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-daemon-config\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155135 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-etc-kubernetes\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155166 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c67r6\" (UniqueName: 
\"kubernetes.io/projected/b59193de-17ea-458e-9569-6881173e66e8-kube-api-access-c67r6\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155199 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-os-release\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155225 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-hostroot\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155249 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-conf-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155311 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-system-cni-dir\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155350 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-cni-bin\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155377 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46n9\" (UniqueName: \"kubernetes.io/projected/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-kube-api-access-w46n9\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155405 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b59193de-17ea-458e-9569-6881173e66e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155431 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fca7afa5-1274-436d-ab61-9e8796e4774c-proxy-tls\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155455 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-cni-multus\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155483 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-multus-certs\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155517 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-cni-binary-copy\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155545 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-kubelet\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155582 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fca7afa5-1274-436d-ab61-9e8796e4774c-mcd-auth-proxy-config\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155614 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b59193de-17ea-458e-9569-6881173e66e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: 
I0223 00:07:06.155651 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-socket-dir-parent\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155674 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-cni-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155695 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-cnibin\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155718 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-netns\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155740 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155762 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fca7afa5-1274-436d-ab61-9e8796e4774c-rootfs\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155812 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpkt4\" (UniqueName: \"kubernetes.io/projected/fca7afa5-1274-436d-ab61-9e8796e4774c-kube-api-access-xpkt4\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155843 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-k8s-cni-cncf-io\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155871 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-os-release\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155895 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-system-cni-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" 
Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.155918 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-cnibin\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.163673 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.191474 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:06:48Z\\\",\\\"message\\\":\\\"W0223 00:06:47.472657 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 00:06:47.474677 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771805207 cert, and key in /tmp/serving-cert-599428886/serving-signer.crt, /tmp/serving-cert-599428886/serving-signer.key\\\\nI0223 00:06:48.134431 1 observer_polling.go:159] Starting file observer\\\\nW0223 00:06:48.137125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0223 00:06:48.137301 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:06:48.138031 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-599428886/tls.crt::/tmp/serving-cert-599428886/tls.key\\\\\\\"\\\\nF0223 00:06:48.518995 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.201477 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.210261 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.217406 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.227090 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.237758 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:06:48Z\\\",\\\"message\\\":\\\"W0223 00:06:47.472657 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 00:06:47.474677 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771805207 cert, and key in /tmp/serving-cert-599428886/serving-signer.crt, /tmp/serving-cert-599428886/serving-signer.key\\\\nI0223 00:06:48.134431 1 observer_polling.go:159] Starting file observer\\\\nW0223 00:06:48.137125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0223 00:06:48.137301 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:06:48.138031 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-599428886/tls.crt::/tmp/serving-cert-599428886/tls.key\\\\\\\"\\\\nF0223 00:06:48.518995 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.248841 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:33:49.891204295 +0000 UTC Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.249805 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256670 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w46n9\" (UniqueName: \"kubernetes.io/projected/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-kube-api-access-w46n9\") pod \"multus-pxzfb\" (UID: 
\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256703 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b59193de-17ea-458e-9569-6881173e66e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256720 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fca7afa5-1274-436d-ab61-9e8796e4774c-proxy-tls\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256741 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-cni-multus\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256758 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-multus-certs\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256778 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-cni-binary-copy\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " 
pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256794 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-kubelet\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256822 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b59193de-17ea-458e-9569-6881173e66e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256837 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fca7afa5-1274-436d-ab61-9e8796e4774c-mcd-auth-proxy-config\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256863 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-socket-dir-parent\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256879 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-cni-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" 
Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256893 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-cnibin\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256892 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-cni-multus\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256934 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-netns\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256909 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-netns\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.256986 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-k8s-cni-cncf-io\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257016 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257038 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fca7afa5-1274-436d-ab61-9e8796e4774c-rootfs\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257056 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpkt4\" (UniqueName: \"kubernetes.io/projected/fca7afa5-1274-436d-ab61-9e8796e4774c-kube-api-access-xpkt4\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257074 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-os-release\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257095 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-cnibin\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257114 4953 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-system-cni-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257136 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c67r6\" (UniqueName: \"kubernetes.io/projected/b59193de-17ea-458e-9569-6881173e66e8-kube-api-access-c67r6\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257162 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-daemon-config\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257178 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-etc-kubernetes\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-os-release\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257224 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-system-cni-dir\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257240 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-cni-bin\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257258 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-hostroot\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257275 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-conf-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257343 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-conf-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257368 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-k8s-cni-cncf-io\") pod \"multus-pxzfb\" (UID: 
\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257422 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b59193de-17ea-458e-9569-6881173e66e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257462 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b59193de-17ea-458e-9569-6881173e66e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257484 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-run-multus-certs\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257466 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-kubelet\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257535 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fca7afa5-1274-436d-ab61-9e8796e4774c-mcd-auth-proxy-config\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257580 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-cnibin\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257643 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-cni-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257678 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fca7afa5-1274-436d-ab61-9e8796e4774c-rootfs\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257652 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-socket-dir-parent\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257736 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-system-cni-dir\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257739 4953 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-host-var-lib-cni-bin\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257746 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-cni-binary-copy\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257782 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-etc-kubernetes\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257793 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-hostroot\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257751 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-os-release\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257750 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-os-release\") pod \"multus-pxzfb\" (UID: 
\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257747 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-system-cni-dir\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.257707 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-cnibin\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.258197 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-multus-daemon-config\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.261587 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.263149 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b59193de-17ea-458e-9569-6881173e66e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: 
\"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.269616 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.280679 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.287607 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fca7afa5-1274-436d-ab61-9e8796e4774c-proxy-tls\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.287740 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c67r6\" (UniqueName: \"kubernetes.io/projected/b59193de-17ea-458e-9569-6881173e66e8-kube-api-access-c67r6\") pod \"multus-additional-cni-plugins-dw5dv\" (UID: \"b59193de-17ea-458e-9569-6881173e66e8\") " pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.287743 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w46n9\" (UniqueName: \"kubernetes.io/projected/c6ae22b1-a5f9-483a-be3d-32cfb7d516d5-kube-api-access-w46n9\") pod \"multus-pxzfb\" (UID: \"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\") " pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.287798 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpkt4\" (UniqueName: 
\"kubernetes.io/projected/fca7afa5-1274-436d-ab61-9e8796e4774c-kube-api-access-xpkt4\") pod \"machine-config-daemon-gpl86\" (UID: \"fca7afa5-1274-436d-ab61-9e8796e4774c\") " pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.293142 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.303514 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.313949 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.324874 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.326068 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.326232 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.333904 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.341069 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-pxzfb" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.341088 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.351445 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.352229 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.357468 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.367891 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: W0223 00:07:06.387036 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb59193de_17ea_458e_9569_6881173e66e8.slice/crio-b66bdc437579a84de29b5f8b0fb478c235b9d6b0f87606ec6f5b1ae54284f842 WatchSource:0}: Error finding container b66bdc437579a84de29b5f8b0fb478c235b9d6b0f87606ec6f5b1ae54284f842: Status 404 returned error can't find the container with id b66bdc437579a84de29b5f8b0fb478c235b9d6b0f87606ec6f5b1ae54284f842 Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.460826 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69mr8"] Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.464259 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.467205 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.468445 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.468733 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.468736 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.468774 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.468847 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.469194 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.472967 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.473012 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"60e67d18fbc8a4b244728f5ad8b02a6fd045f6a58b4df9b1696be8f458452dba"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.474557 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxzfb" event={"ID":"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5","Type":"ContainerStarted","Data":"101e5558543e67064cd1a45267a82ef66b318f38d7aeda978db1ed83a4dd60cb"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.477582 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.477779 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.481058 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerStarted","Data":"b66bdc437579a84de29b5f8b0fb478c235b9d6b0f87606ec6f5b1ae54284f842"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.482302 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-sqwrp" event={"ID":"cae958ea-e590-41fb-a965-b4d17d18002c","Type":"ContainerStarted","Data":"866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.482338 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-sqwrp" event={"ID":"cae958ea-e590-41fb-a965-b4d17d18002c","Type":"ContainerStarted","Data":"4b360476921cbc365951b7be40e2321f234c6cb2b711fd2891cce61b5d3eca66"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.483482 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"ada29f9ece14a7325c33e614b6fb237b5ed6dde9f188e9b2aa7b517da4c985d8"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.484441 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bb213dc4bc5c6aac4f2f03150b05cdcb038aaef661f727032d18d90679b7d3da"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.485309 4953 scope.go:117] "RemoveContainer" containerID="29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b" Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.485473 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.488634 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.488872 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.488926 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.488943 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c287786e76f1501e4a99269f414058eba1be9cd54ceebc32a5df1c2176b4e70f"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.493729 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4jfxl" event={"ID":"920835de-d258-45c0-beaa-c478dddb38e9","Type":"ContainerStarted","Data":"cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.493773 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4jfxl" event={"ID":"920835de-d258-45c0-beaa-c478dddb38e9","Type":"ContainerStarted","Data":"ab32ce2c3f31da044b458831d5fe0333be384ae2917510ab203d6687072b10dc"} Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.498864 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.510434 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.522881 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.537274 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a421e3b26879d1018eab474ba5f87e79c01c130a3f061bf70d05ec7e097d67\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:06:48Z\\\",\\\"message\\\":\\\"W0223 00:06:47.472657 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 00:06:47.474677 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771805207 cert, and key in /tmp/serving-cert-599428886/serving-signer.crt, /tmp/serving-cert-599428886/serving-signer.key\\\\nI0223 00:06:48.134431 1 observer_polling.go:159] Starting file observer\\\\nW0223 00:06:48.137125 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0223 00:06:48.137301 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:06:48.138031 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-599428886/tls.crt::/tmp/serving-cert-599428886/tls.key\\\\\\\"\\\\nF0223 00:06:48.518995 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.548717 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563356 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-ovn-kubernetes\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563405 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-ovn\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563424 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-log-socket\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563444 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-etc-openvswitch\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563464 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-systemd-units\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563481 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563498 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-config\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563514 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-script-lib\") 
pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563567 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-var-lib-openvswitch\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563590 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-bin\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563614 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-kubelet\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563630 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-slash\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563649 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-node-log\") 
pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563664 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5937f1d2-1966-4337-b099-ad0af539fe11-ovn-node-metrics-cert\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563690 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-openvswitch\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563707 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-netns\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563721 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x88gj\" (UniqueName: \"kubernetes.io/projected/5937f1d2-1966-4337-b099-ad0af539fe11-kube-api-access-x88gj\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563740 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-systemd\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563754 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-env-overrides\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.563775 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-netd\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.585518 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.620441 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.664831 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-slash\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.664875 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-bin\") pod \"ovnkube-node-69mr8\" (UID: 
\"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.664909 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-kubelet\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.664948 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-node-log\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.664968 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5937f1d2-1966-4337-b099-ad0af539fe11-ovn-node-metrics-cert\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.664993 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-slash\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.665020 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-openvswitch\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 
23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.665053 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-bin\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.665104 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-kubelet\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.665114 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-netns\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.665080 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-openvswitch\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.665139 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x88gj\" (UniqueName: \"kubernetes.io/projected/5937f1d2-1966-4337-b099-ad0af539fe11-kube-api-access-x88gj\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666127 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-netns\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666263 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-node-log\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666491 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-systemd\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666568 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-env-overrides\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666607 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-netd\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666619 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-systemd\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666714 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-log-socket\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666719 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-netd\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666744 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-ovn-kubernetes\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666807 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-ovn\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666859 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-etc-openvswitch\") pod 
\"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666866 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-ovn-kubernetes\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666892 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-systemd-units\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666889 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-log-socket\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666910 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-ovn\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666957 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-etc-openvswitch\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666991 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-systemd-units\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.666954 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667039 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667070 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667078 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-config\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667122 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-script-lib\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667149 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-var-lib-openvswitch\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667157 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-env-overrides\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667228 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-var-lib-openvswitch\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667694 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-config\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.667784 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-script-lib\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.675354 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5937f1d2-1966-4337-b099-ad0af539fe11-ovn-node-metrics-cert\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.689648 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x88gj\" (UniqueName: \"kubernetes.io/projected/5937f1d2-1966-4337-b099-ad0af539fe11-kube-api-access-x88gj\") pod \"ovnkube-node-69mr8\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.725863 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.769396 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.802149 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.824824 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:06 crc kubenswrapper[4953]: W0223 00:07:06.835501 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5937f1d2_1966_4337_b099_ad0af539fe11.slice/crio-e84414805283e28d3634d2810893b8d68514dcdab14288a9712f3c6c8ea6ed54 WatchSource:0}: Error finding container e84414805283e28d3634d2810893b8d68514dcdab14288a9712f3c6c8ea6ed54: Status 404 returned error can't find the container with id e84414805283e28d3634d2810893b8d68514dcdab14288a9712f3c6c8ea6ed54 Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.843508 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.885960 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.924370 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.963383 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.969847 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.969953 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970042 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:07:08.9700102 +0000 UTC m=+26.903852086 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970084 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970105 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970118 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.970125 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970169 4953 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:08.970152403 +0000 UTC m=+26.903994339 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.970193 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970216 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970258 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970262 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 00:07:08.970252056 +0000 UTC m=+26.904094012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:06 crc kubenswrapper[4953]: I0223 00:07:06.970217 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970280 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:08.970274396 +0000 UTC m=+26.904116242 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970339 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970349 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970355 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:06 crc kubenswrapper[4953]: E0223 00:07:06.970384 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:08.970375959 +0000 UTC m=+26.904217805 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.002204 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.044782 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.086204 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.123458 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.165529 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.207592 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.248983 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:18:27.874371841 +0000 UTC Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.251681 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers 
with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\
"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mount
Path\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.288762 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd
6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.325872 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.325957 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:07 crc kubenswrapper[4953]: E0223 00:07:07.326074 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:07 crc kubenswrapper[4953]: E0223 00:07:07.326228 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.328605 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.332957 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 23 
00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.334235 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.336364 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.337493 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.338323 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.338953 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.339807 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.341053 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.341939 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 23 
00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.342454 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.342997 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.343868 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.344508 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.345192 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.345797 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.346361 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.346925 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 23 
00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.347338 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.347908 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.348484 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.348951 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.349522 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.349986 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.350633 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.351071 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 23 
00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.354513 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.355566 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.356040 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.356642 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.357531 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.358009 4953 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.358108 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.360282 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" 
path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.360852 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.361268 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.363013 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.364078 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.364650 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.365689 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.366450 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.368376 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.368682 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.369455 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.370653 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.372050 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.373152 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 
00:07:07.374577 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.375911 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.379094 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.380391 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.381348 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.382395 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.383376 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.384521 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 
00:07:07.385483 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.404587 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.498218 4953 generic.go:334] "Generic (PLEG): container finished" podID="b59193de-17ea-458e-9569-6881173e66e8" containerID="570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442" exitCode=0 Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.498321 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerDied","Data":"570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442"} Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.501185 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418"} Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.501323 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08"} Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.504119 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxzfb" event={"ID":"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5","Type":"ContainerStarted","Data":"c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371"} Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.507841 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" 
containerID="41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe" exitCode=0 Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.507944 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.508118 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"e84414805283e28d3634d2810893b8d68514dcdab14288a9712f3c6c8ea6ed54"} Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.515355 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.527228 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.552990 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.568762 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.606298 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.644244 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.683982 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.727387 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.772667 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.809866 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.818133 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.834489 4953 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.848429 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.856264 4953 csr.go:261] certificate signing request csr-drbmt is approved, waiting to be issued Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.865646 4953 csr.go:257] certificate signing request csr-drbmt is issued Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.899611 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.941237 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:07 crc kubenswrapper[4953]: I0223 00:07:07.995866 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.031994 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.056528 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.085532 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.124783 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.168148 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.205642 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.245486 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.249523 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:14:03.947907589 +0000 UTC Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.286865 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.326038 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.326369 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.326520 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.372281 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r
dwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.410584 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.445130 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.496528 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.520225 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerStarted","Data":"769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e"} Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.537758 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.537857 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.537931 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.537944 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.537955 4953 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.537971 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.537980 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.565750 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.609705 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.650443 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.691639 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.733457 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.763857 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.802968 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.846841 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.866628 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-23 00:02:07 +0000 UTC, rotation deadline is 2026-11-30 03:06:43.09019204 +0000 UTC Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.866697 4953 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6722h59m34.22350195s for next certificate rotation Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.886175 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.928244 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.966224 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:08Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.991414 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.991661 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:07:12.991622298 +0000 UTC m=+30.925464144 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.991723 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.991764 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.991801 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:08 crc kubenswrapper[4953]: I0223 00:07:08.991833 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.991917 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.991974 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.991981 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:12.991970837 +0000 UTC m=+30.925812683 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992034 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:12.992022418 +0000 UTC m=+30.925864264 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992043 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992076 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992094 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992118 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992126 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992143 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992232 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:12.992208722 +0000 UTC m=+30.926050758 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:08 crc kubenswrapper[4953]: E0223 00:07:08.992257 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-23 00:07:12.992246513 +0000 UTC m=+30.926088569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.003918 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d1
88c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.046111 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.085027 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.249860 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:42:20.713436494 +0000 UTC Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.325833 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.325939 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:09 crc kubenswrapper[4953]: E0223 00:07:09.326554 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:09 crc kubenswrapper[4953]: E0223 00:07:09.326652 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.544434 4953 generic.go:334] "Generic (PLEG): container finished" podID="b59193de-17ea-458e-9569-6881173e66e8" containerID="769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e" exitCode=0 Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.545095 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerDied","Data":"769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e"} Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.547120 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1"} Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.577477 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.598067 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.617329 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.631122 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.648606 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.665024 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.684633 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.702799 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.719322 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.736068 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.748993 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.760713 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.774448 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.789801 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.804337 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.816896 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.835172 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.849965 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.860480 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.889040 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.928082 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:09 crc kubenswrapper[4953]: I0223 00:07:09.975336 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:09Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.012091 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.049250 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.094704 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.127992 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.174949 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.214263 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.250203 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:42:31.73024746 +0000 UTC Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.326045 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:10 crc kubenswrapper[4953]: E0223 00:07:10.326829 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.556095 4953 generic.go:334] "Generic (PLEG): container finished" podID="b59193de-17ea-458e-9569-6881173e66e8" containerID="17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705" exitCode=0 Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.556632 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerDied","Data":"17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705"} Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.583236 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.602149 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.623538 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.632874 4953 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.636353 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.636407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.636419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.636595 4953 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.645180 4953 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.645577 4953 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.647385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.647425 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.647439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.647456 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.647478 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:10Z","lastTransitionTime":"2026-02-23T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.648752 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.670208 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: E0223 00:07:10.672088 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.678928 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.679011 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.679026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.679046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.679089 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:10Z","lastTransitionTime":"2026-02-23T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.695870 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: E0223 00:07:10.696630 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.712508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.712567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.712581 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.712606 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.712622 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:10Z","lastTransitionTime":"2026-02-23T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.716224 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: E0223 00:07:10.728051 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.735566 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.736914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.737018 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.737034 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.737053 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.737098 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:10Z","lastTransitionTime":"2026-02-23T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:10 crc kubenswrapper[4953]: E0223 00:07:10.750137 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.754936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.755012 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.755028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.755051 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.755066 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:10Z","lastTransitionTime":"2026-02-23T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.756775 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:
07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: E0223 00:07:10.773802 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: E0223 00:07:10.773976 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.774389 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 
00:07:10.776184 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.776232 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.776247 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.776267 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.776281 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:10Z","lastTransitionTime":"2026-02-23T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.794153 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.810530 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.823742 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.839070 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.880982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.881034 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.881050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.881117 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.881135 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:10Z","lastTransitionTime":"2026-02-23T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.983601 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.983644 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.983652 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.983667 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:10 crc kubenswrapper[4953]: I0223 00:07:10.983679 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:10Z","lastTransitionTime":"2026-02-23T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.087337 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.087385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.087394 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.087411 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.087428 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.190564 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.190655 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.190683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.190723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.190751 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.251438 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:00:49.337004892 +0000 UTC Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.295759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.295830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.295848 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.295874 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.295897 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.326022 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.326081 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:11 crc kubenswrapper[4953]: E0223 00:07:11.326279 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:11 crc kubenswrapper[4953]: E0223 00:07:11.327600 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.398445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.398505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.398524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.398551 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.398572 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.502644 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.502735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.502759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.502799 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.502824 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.567149 4953 generic.go:334] "Generic (PLEG): container finished" podID="b59193de-17ea-458e-9569-6881173e66e8" containerID="44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e" exitCode=0 Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.567317 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerDied","Data":"44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.574151 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.589367 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.606084 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.606152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.606170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.606196 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.606214 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.610719 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.628620 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.653317 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.673901 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.689257 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.708917 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.709833 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.709863 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.709875 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.709896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.709911 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.732179 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z 
is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.760257 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.795556 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.812647 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.812693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.812703 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.812721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.812733 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.820154 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.841098 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.863844 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.881019 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:11Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.915997 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.916064 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.916091 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.916125 4953 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:11 crc kubenswrapper[4953]: I0223 00:07:11.916148 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:11Z","lastTransitionTime":"2026-02-23T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.020143 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.020207 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.020228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.020257 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.020280 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.125370 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.125459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.125486 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.125522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.125547 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.229387 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.229466 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.229490 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.229524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.229548 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.251825 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:09:09.563414446 +0000 UTC Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.325732 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:12 crc kubenswrapper[4953]: E0223 00:07:12.326002 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.333805 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.333869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.333917 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.333949 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.333978 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.437820 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.437902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.437927 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.438001 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.438027 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.541902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.542014 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.542042 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.542074 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.542098 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.584386 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerStarted","Data":"fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.607569 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.631321 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 
00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.646107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.646174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.646193 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.646222 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.646241 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.655064 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.674960 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.695505 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.726201 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.750318 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.750407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.750427 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.750458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.750178 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.750480 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.771393 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0
cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.788908 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.804266 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.824593 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.845258 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67
r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.853845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.853932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.853957 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.853987 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.854007 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.869685 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.892996 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.939469 4953 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.971543 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.971596 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:12 crc 
kubenswrapper[4953]: I0223 00:07:12.971608 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.971630 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:12 crc kubenswrapper[4953]: I0223 00:07:12.971645 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:12Z","lastTransitionTime":"2026-02-23T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.040574 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.040806 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:07:21.040761359 +0000 UTC m=+38.974603245 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.041172 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.041224 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.041257 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.041281 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041463 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041524 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041558 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041559 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041603 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:21.041571128 +0000 UTC m=+38.975413004 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041486 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041688 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:21.04165881 +0000 UTC m=+38.975500666 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041695 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041715 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041582 4953 projected.go:194] Error 
preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041796 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:21.041773143 +0000 UTC m=+38.975614999 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.041903 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:21.041891286 +0000 UTC m=+38.975733142 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.079185 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.079224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.079236 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.079256 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.079270 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.182980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.183054 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.183081 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.183114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.183140 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.252227 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:31:03.479248757 +0000 UTC Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.286914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.286986 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.287011 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.287046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.287071 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.325884 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.325973 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.326130 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.326465 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.352169 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.372591 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.390431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.390498 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.390515 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc 
kubenswrapper[4953]: I0223 00:07:13.390542 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.390558 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.395505 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.418184 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.440075 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.464332 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.492328 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.493921 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.493972 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.493990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 
00:07:13.494019 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.494040 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.519643 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4
996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.548449 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.573476 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd
6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.594474 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.595275 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.595511 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.596921 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.596964 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.596978 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.596999 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.597012 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.602411 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.623199 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.630140 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.631325 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.637544 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.638421 4953 scope.go:117] "RemoveContainer" containerID="29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b" Feb 23 00:07:13 crc kubenswrapper[4953]: E0223 00:07:13.638612 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.642829 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.662415 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.678895 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.693224 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.699182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.699244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.699257 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.699304 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.699319 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.708265 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.722582 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.737532 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.753410 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.773928 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67
r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.802116 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.802178 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.802191 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.802213 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.802225 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.802026 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.825611 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.873201 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.897681 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.904936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.904996 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.905015 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.905038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.905056 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:13Z","lastTransitionTime":"2026-02-23T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.911604 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.925077 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:13 crc kubenswrapper[4953]: I0223 00:07:13.938857 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.008573 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.008613 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.008624 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc 
kubenswrapper[4953]: I0223 00:07:14.008645 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.008658 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.111999 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.112044 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.112059 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.112084 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.112099 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.215366 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.215433 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.215453 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.215487 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.215513 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.252674 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:13:39.732012514 +0000 UTC Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.318896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.318929 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.318939 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.318961 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.318973 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.325704 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:14 crc kubenswrapper[4953]: E0223 00:07:14.325834 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.422733 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.422789 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.422807 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.422831 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.422848 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.527460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.527516 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.527533 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.527562 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.527581 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.605800 4953 generic.go:334] "Generic (PLEG): container finished" podID="b59193de-17ea-458e-9569-6881173e66e8" containerID="fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa" exitCode=0 Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.605885 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerDied","Data":"fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.606251 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.633516 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4
f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.635402 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.635452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.635471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.635496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.635513 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.652648 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.669792 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.690607 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.712169 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.729589 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.737555 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.737619 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.737643 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.737669 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.737690 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.746900 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z 
is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.768816 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.794077 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.815860 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.831160 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.840848 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.840884 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.840897 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.840915 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.840929 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.845576 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.858404 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.875941 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.943280 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.943328 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.943336 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.943356 4953 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:14 crc kubenswrapper[4953]: I0223 00:07:14.943365 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:14Z","lastTransitionTime":"2026-02-23T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.046329 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.046415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.046434 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.046462 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.046488 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.150453 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.150507 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.150525 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.150550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.150570 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.252910 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:55:23.515322938 +0000 UTC Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.255346 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.255439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.255464 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.255571 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.255613 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.327599 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.327776 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:15 crc kubenswrapper[4953]: E0223 00:07:15.328008 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:15 crc kubenswrapper[4953]: E0223 00:07:15.328330 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.361336 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.361383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.361395 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.361416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.361430 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.465216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.465681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.465824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.466101 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.466221 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.570433 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.570487 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.570497 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.570517 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.570529 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.621217 4953 generic.go:334] "Generic (PLEG): container finished" podID="b59193de-17ea-458e-9569-6881173e66e8" containerID="3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2" exitCode=0 Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.621353 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerDied","Data":"3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.621487 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.643656 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e
01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.667940 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.673648 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.673710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.673727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.673751 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.673765 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.686634 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.707552 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.724078 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.742963 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.757017 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.772542 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.776887 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.776944 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.776957 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.776979 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.776994 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.788941 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z 
is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.807262 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.836939 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.854385 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.869931 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.879670 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.879707 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.879719 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.879741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.879751 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.885869 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:15Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.982840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.982902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.982914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.982934 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:15 crc kubenswrapper[4953]: I0223 00:07:15.982949 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:15Z","lastTransitionTime":"2026-02-23T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.085196 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.085273 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.085327 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.085365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.085387 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.188628 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.188683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.188696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.188719 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.188732 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.253636 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:11:31.039657465 +0000 UTC Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.298753 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.298805 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.298816 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.298833 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.298846 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.326341 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:16 crc kubenswrapper[4953]: E0223 00:07:16.326501 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.402633 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.402693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.402706 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.402726 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.402742 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.506451 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.506517 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.506535 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.506563 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.506584 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.610145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.610218 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.610237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.610272 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.610317 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.631276 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" event={"ID":"b59193de-17ea-458e-9569-6881173e66e8","Type":"ContainerStarted","Data":"03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.659127 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.680163 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.696750 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.714447 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.714526 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.714568 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.714615 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.714645 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.719728 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.750980 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.774378 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.796829 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.818537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.818630 4953 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.818652 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.818681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.818700 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.819542 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.838204 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.854655 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.876647 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.893933 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.911992 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.921241 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.921298 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.921313 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.921338 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.921350 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:16Z","lastTransitionTime":"2026-02-23T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:16 crc kubenswrapper[4953]: I0223 00:07:16.940106 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.024169 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.024199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.024208 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.024221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.024230 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.127452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.127494 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.127506 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.127529 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.127541 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.229692 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.229730 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.229739 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.229753 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.229763 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.253869 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:05:04.571698451 +0000 UTC Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.325428 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.325458 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:17 crc kubenswrapper[4953]: E0223 00:07:17.325640 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:17 crc kubenswrapper[4953]: E0223 00:07:17.325687 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.332445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.332505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.332521 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.332536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.332550 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.436599 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.437190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.437401 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.437593 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.437736 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.541269 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.541416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.541436 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.541461 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.541475 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.643657 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.643755 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.643787 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.643828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.643861 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.748571 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.749177 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.749196 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.749217 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.749230 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.856702 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.856767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.856779 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.856799 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.856814 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.959878 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.959923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.959935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.959949 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:17 crc kubenswrapper[4953]: I0223 00:07:17.959959 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:17Z","lastTransitionTime":"2026-02-23T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.062680 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.062769 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.062791 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.062819 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.062838 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.165942 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.166005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.166019 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.166040 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.166053 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.254478 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:55:09.297254142 +0000 UTC Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.269100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.269143 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.269152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.269174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.269185 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.326188 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:18 crc kubenswrapper[4953]: E0223 00:07:18.326418 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.372050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.372092 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.372107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.372131 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.372146 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.475333 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.475406 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.475421 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.475452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.475469 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.578365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.578419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.578436 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.578460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.578476 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.641103 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/0.log" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.645767 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7" exitCode=1 Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.645838 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.646853 4953 scope.go:117] "RemoveContainer" containerID="4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.669424 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.681820 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.681902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.681922 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc 
kubenswrapper[4953]: I0223 00:07:18.681953 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.681974 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.691426 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.712364 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.732104 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 
2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.752189 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.765783 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.785040 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.785537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.785596 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.785612 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.785633 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.785643 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.806100 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.820353 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.835888 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.855886 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609a
de2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.880423 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00
:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd14
27fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.889447 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.889506 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.889529 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.889564 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.889590 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.904482 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.920557 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.992934 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.993024 4953 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.993048 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.993076 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:18 crc kubenswrapper[4953]: I0223 00:07:18.993095 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:18Z","lastTransitionTime":"2026-02-23T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.096256 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.096326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.096339 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.096359 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.096371 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.150966 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b"] Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.152281 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.156072 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.157645 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.180855 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.194416 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.199343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.199391 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.199403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.199426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.199441 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.210483 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f40483cd-6612-4eee-83e1-2a0972311b26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.210533 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f40483cd-6612-4eee-83e1-2a0972311b26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.210605 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knsx2\" (UniqueName: \"kubernetes.io/projected/f40483cd-6612-4eee-83e1-2a0972311b26-kube-api-access-knsx2\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.210649 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f40483cd-6612-4eee-83e1-2a0972311b26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.225265 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.245312 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.255612 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:47:36.401532724 +0000 UTC Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.265325 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.282604 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.303619 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.303667 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.303679 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.303700 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.303713 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.303693 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.311208 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f40483cd-6612-4eee-83e1-2a0972311b26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.311264 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knsx2\" (UniqueName: \"kubernetes.io/projected/f40483cd-6612-4eee-83e1-2a0972311b26-kube-api-access-knsx2\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.311310 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f40483cd-6612-4eee-83e1-2a0972311b26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.311333 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f40483cd-6612-4eee-83e1-2a0972311b26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.311941 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f40483cd-6612-4eee-83e1-2a0972311b26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.312257 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f40483cd-6612-4eee-83e1-2a0972311b26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.318689 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f40483cd-6612-4eee-83e1-2a0972311b26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.323563 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd14
27fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.325581 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.325608 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:19 crc kubenswrapper[4953]: E0223 00:07:19.325762 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:19 crc kubenswrapper[4953]: E0223 00:07:19.325851 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.330824 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knsx2\" (UniqueName: \"kubernetes.io/projected/f40483cd-6612-4eee-83e1-2a0972311b26-kube-api-access-knsx2\") pod \"ovnkube-control-plane-749d76644c-5jz4b\" (UID: \"f40483cd-6612-4eee-83e1-2a0972311b26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.341536 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd
6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.357593 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.371050 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.387165 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.400586 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.406495 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.406745 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.406879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.406985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.407076 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.417069 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.431038 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.475959 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" Feb 23 00:07:19 crc kubenswrapper[4953]: W0223 00:07:19.488259 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf40483cd_6612_4eee_83e1_2a0972311b26.slice/crio-d75747b89fcd267f7d030ebf5af86fe17d842d266c79ce7bf15345a8c6a076f6 WatchSource:0}: Error finding container d75747b89fcd267f7d030ebf5af86fe17d842d266c79ce7bf15345a8c6a076f6: Status 404 returned error can't find the container with id d75747b89fcd267f7d030ebf5af86fe17d842d266c79ce7bf15345a8c6a076f6 Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.510916 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.510969 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.510981 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.511006 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.511020 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.614874 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.614937 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.614953 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.614980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.614996 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.656647 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/0.log" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.660012 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.660156 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.662273 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" event={"ID":"f40483cd-6612-4eee-83e1-2a0972311b26","Type":"ContainerStarted","Data":"d75747b89fcd267f7d030ebf5af86fe17d842d266c79ce7bf15345a8c6a076f6"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.683663 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.706654 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd
6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.717577 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.718132 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.718146 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.718169 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.718185 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.726347 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.743010 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.757763 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.771591 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.786441 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609a
de2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.801613 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.814334 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.820913 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.820989 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.821007 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc 
kubenswrapper[4953]: I0223 00:07:19.821032 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.821070 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.827172 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.841974 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.856338 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.871100 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.890604 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.909496 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:19Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.923680 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.923729 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.923743 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.923768 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:19 crc kubenswrapper[4953]: I0223 00:07:19.923783 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:19Z","lastTransitionTime":"2026-02-23T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.026650 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.026695 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.026708 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.026726 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.026737 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.130356 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.130395 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.130408 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.130425 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.130437 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.233091 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.233171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.233195 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.233229 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.233249 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.256231 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:48:59.98642247 +0000 UTC Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.278651 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wppgs"] Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.279171 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:20 crc kubenswrapper[4953]: E0223 00:07:20.279236 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.292129 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.307992 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.322869 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.323014 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxbvm\" (UniqueName: \"kubernetes.io/projected/71837ac6-9a75-4640-af98-633ccdd09e20-kube-api-access-wxbvm\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.325608 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:20 crc kubenswrapper[4953]: E0223 00:07:20.325749 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.327642 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.336676 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.336741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.336758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.336782 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.336796 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.345167 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.367354 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.382597 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc 
kubenswrapper[4953]: I0223 00:07:20.402902 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.420514 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.423841 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" 
(UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.424021 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxbvm\" (UniqueName: \"kubernetes.io/projected/71837ac6-9a75-4640-af98-633ccdd09e20-kube-api-access-wxbvm\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:20 crc kubenswrapper[4953]: E0223 00:07:20.424077 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:20 crc kubenswrapper[4953]: E0223 00:07:20.424210 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs podName:71837ac6-9a75-4640-af98-633ccdd09e20 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:20.924174211 +0000 UTC m=+38.858016097 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs") pod "network-metrics-daemon-wppgs" (UID: "71837ac6-9a75-4640-af98-633ccdd09e20") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.438512 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.440344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.440389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.440398 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.440416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.440427 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.448055 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxbvm\" (UniqueName: \"kubernetes.io/projected/71837ac6-9a75-4640-af98-633ccdd09e20-kube-api-access-wxbvm\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.454250 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b
41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.500587 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.535829 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.542952 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.543022 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.543038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.543066 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.543082 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.554250 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.569260 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.583057 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.599757 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.646016 4953 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.646070 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.646078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.646094 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.646107 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.669478 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" event={"ID":"f40483cd-6612-4eee-83e1-2a0972311b26","Type":"ContainerStarted","Data":"6fef75cdd610fcac5239d257ae46ecd1ed6eb11f0b367f08f2c98d8d1a1e1063"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.669530 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" event={"ID":"f40483cd-6612-4eee-83e1-2a0972311b26","Type":"ContainerStarted","Data":"4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.672063 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/1.log" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.673320 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/0.log" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.676779 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a" exitCode=1 Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.676855 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.676924 4953 scope.go:117] "RemoveContainer" containerID="4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.678574 
4953 scope.go:117] "RemoveContainer" containerID="770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a" Feb 23 00:07:20 crc kubenswrapper[4953]: E0223 00:07:20.678932 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.695271 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.712363 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.729850 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.743425 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.749179 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.749257 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.749276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.749333 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.749352 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.757921 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.770016 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.781486 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.796119 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.816058 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.839013 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.851932 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc 
kubenswrapper[4953]: I0223 00:07:20.852326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.852391 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.852418 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.852454 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.852480 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.870912 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.888549 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.905195 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.921652 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.930400 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:20 crc kubenswrapper[4953]: E0223 00:07:20.930520 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:20 crc kubenswrapper[4953]: E0223 00:07:20.930572 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs podName:71837ac6-9a75-4640-af98-633ccdd09e20 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:21.930557385 +0000 UTC m=+39.864399231 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs") pod "network-metrics-daemon-wppgs" (UID: "71837ac6-9a75-4640-af98-633ccdd09e20") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.939746 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.956247 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.956307 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.956320 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.956344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.956360 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.959072 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.980122 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.984237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.984326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.984353 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.984383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:20 crc kubenswrapper[4953]: I0223 00:07:20.984403 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:20Z","lastTransitionTime":"2026-02-23T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:20.999786 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.000034 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.005846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.005959 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.006010 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 
00:07:21.006050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.006075 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.023895 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f
87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.031450 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.036826 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.036886 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.036903 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.036928 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.036942 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.049187 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.054635 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.054687 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.054705 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.054727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.054742 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.057339 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 00:07:19.656901 6407 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0223 00:07:19.656916 6407 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0223 00:07:19.656924 6407 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\
\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 
00:07:21.070861 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc 
kubenswrapper[4953]: E0223 00:07:21.073755 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.077849 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.077904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.077927 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.077954 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.077973 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.087750 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.093976 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.094175 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.096552 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.096598 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.096614 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.096638 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.096654 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.105212 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.122419 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.131987 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.132190 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.132256 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132341 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:07:37.132306347 +0000 UTC m=+55.066148193 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132372 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.132402 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132500 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:37.13247034 +0000 UTC m=+55.066312196 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132524 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132566 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132593 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.132598 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132668 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132690 4953 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:37.132655855 +0000 UTC m=+55.066497881 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132735 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:37.132721716 +0000 UTC m=+55.066563582 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132789 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132809 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132826 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.132871 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:37.13286124 +0000 UTC m=+55.066703096 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.136754 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns
-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.153446 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.170085 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.182175 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.193443 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.200725 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.201581 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.201642 4953 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.201685 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.201713 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.213510 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.229174 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:21Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.256643 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:15:37.926402241 +0000 UTC Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.306445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.306519 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.306540 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.306575 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.306598 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.326112 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.326127 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.326361 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.326516 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.410988 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.411059 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.411079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.411109 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.411131 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.544631 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.544714 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.544735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.544766 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.544786 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.648021 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.648077 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.648092 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.648110 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.648121 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.683838 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/1.log" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.750719 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.750795 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.750816 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.750846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.750869 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.860713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.860822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.860852 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.860891 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.860920 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.942826 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.943173 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: E0223 00:07:21.943786 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs podName:71837ac6-9a75-4640-af98-633ccdd09e20 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:23.94374793 +0000 UTC m=+41.877589806 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs") pod "network-metrics-daemon-wppgs" (UID: "71837ac6-9a75-4640-af98-633ccdd09e20") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.964093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.964177 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.964200 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.964231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:21 crc kubenswrapper[4953]: I0223 00:07:21.964255 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:21Z","lastTransitionTime":"2026-02-23T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.067357 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.067515 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.067601 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.067626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.067646 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.170540 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.170620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.170639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.170671 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.170692 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.257683 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:10:39.910516684 +0000 UTC Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.274268 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.274322 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.274332 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.274345 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.274357 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.325890 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.325937 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:22 crc kubenswrapper[4953]: E0223 00:07:22.326067 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:22 crc kubenswrapper[4953]: E0223 00:07:22.326210 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.377091 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.377253 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.377276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.377330 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.377349 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.480145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.480199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.480211 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.480238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.480251 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.583678 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.583724 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.583737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.583760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.583774 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.686625 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.686671 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.686682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.686698 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.686711 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.789631 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.789669 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.789680 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.789696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.789709 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.893620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.893674 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.893689 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.893710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.893723 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.996872 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.996924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.996955 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.996975 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:22 crc kubenswrapper[4953]: I0223 00:07:22.996986 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:22Z","lastTransitionTime":"2026-02-23T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.099456 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.099496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.099506 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.099521 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.099532 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.203199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.203318 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.203352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.203394 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.203511 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.258628 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:54:46.086016083 +0000 UTC Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.307504 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.307581 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.307598 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.307626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.307646 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.325955 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:23 crc kubenswrapper[4953]: E0223 00:07:23.326122 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.326556 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:23 crc kubenswrapper[4953]: E0223 00:07:23.326775 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.349584 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.368927 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.386794 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.400813 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.410352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.410385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.410394 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.410409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.410420 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.414368 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.438114 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.459697 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.479957 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.506274 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.514783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.514834 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.514847 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.514868 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.514881 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.536802 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 00:07:19.656901 6407 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0223 00:07:19.656916 6407 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0223 00:07:19.656924 6407 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\
\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 
00:07:23.554531 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc 
kubenswrapper[4953]: I0223 00:07:23.577972 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.595906 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.612630 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.617387 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.617438 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.617456 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.617476 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.617495 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.627391 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.642008 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.721248 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc 
kubenswrapper[4953]: I0223 00:07:23.721367 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.721383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.721405 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.721424 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.823946 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.824040 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.824066 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.824103 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.824130 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.927251 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.927350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.927366 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.927388 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.927402 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:23Z","lastTransitionTime":"2026-02-23T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:23 crc kubenswrapper[4953]: I0223 00:07:23.968367 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:23 crc kubenswrapper[4953]: E0223 00:07:23.968643 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:23 crc kubenswrapper[4953]: E0223 00:07:23.968783 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs podName:71837ac6-9a75-4640-af98-633ccdd09e20 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:27.968747241 +0000 UTC m=+45.902589117 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs") pod "network-metrics-daemon-wppgs" (UID: "71837ac6-9a75-4640-af98-633ccdd09e20") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.031622 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.031678 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.031696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.031726 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.031743 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.136113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.136174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.136185 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.136205 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.136217 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.239455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.239524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.239539 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.239561 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.239574 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.259583 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:10:58.08941004 +0000 UTC Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.326386 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:24 crc kubenswrapper[4953]: E0223 00:07:24.326617 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.326386 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:24 crc kubenswrapper[4953]: E0223 00:07:24.327237 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.343914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.343972 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.343991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.344018 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.344045 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.448078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.448142 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.448159 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.448185 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.448203 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.552738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.552802 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.552818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.552847 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.552865 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.656836 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.656923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.656942 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.656976 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.657000 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.759605 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.759651 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.759661 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.759679 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.759691 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.862871 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.862936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.862976 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.863000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.863014 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.967724 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.967803 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.967823 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.967852 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:24 crc kubenswrapper[4953]: I0223 00:07:24.967871 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:24Z","lastTransitionTime":"2026-02-23T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.071701 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.071780 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.071803 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.071834 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.071858 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.175167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.175263 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.175326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.175367 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.175392 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.260268 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:21:41.964386807 +0000 UTC Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.279632 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.279709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.279732 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.279760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.279782 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.326450 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.326529 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:25 crc kubenswrapper[4953]: E0223 00:07:25.326797 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:25 crc kubenswrapper[4953]: E0223 00:07:25.327348 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.383514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.383589 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.383608 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.383635 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.383657 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.488146 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.488241 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.488267 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.488330 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.488365 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.591383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.591475 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.591506 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.591552 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.591579 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.694496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.694552 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.694568 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.694589 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.694603 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.797761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.797808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.797864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.797881 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.797891 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.900451 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.900511 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.900524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.900542 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:25 crc kubenswrapper[4953]: I0223 00:07:25.900556 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:25Z","lastTransitionTime":"2026-02-23T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.003022 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.003094 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.003114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.003136 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.003151 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.107341 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.107420 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.107445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.107476 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.107500 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.210870 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.210918 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.210929 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.210947 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.210959 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.262227 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:55:59.702071192 +0000 UTC Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.314568 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.314653 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.314673 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.314704 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.314722 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.325740 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.325740 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:26 crc kubenswrapper[4953]: E0223 00:07:26.325976 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:26 crc kubenswrapper[4953]: E0223 00:07:26.326073 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.419749 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.419815 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.419828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.419852 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.419870 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.523559 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.523610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.523640 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.523664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.523678 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.626819 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.626891 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.626912 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.626943 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.626962 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.730889 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.731001 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.731026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.731062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.731084 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.834553 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.834990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.835009 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.835035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.835054 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.938991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.939085 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.939106 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.939137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:26 crc kubenswrapper[4953]: I0223 00:07:26.939161 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:26Z","lastTransitionTime":"2026-02-23T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.043262 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.043369 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.043392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.043420 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.043441 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.147706 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.147768 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.147787 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.147818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.147835 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.252157 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.252218 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.252237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.252264 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.252283 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.262975 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:07:01.470425603 +0000 UTC Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.325610 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.325693 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:27 crc kubenswrapper[4953]: E0223 00:07:27.325852 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:27 crc kubenswrapper[4953]: E0223 00:07:27.326038 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.356438 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.356509 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.356537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.356567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.356588 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.459911 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.459987 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.460008 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.460038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.460061 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.564390 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.564473 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.564500 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.564538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.564564 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.675346 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.675400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.675411 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.675473 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.675489 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.778713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.778787 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.778797 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.778816 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.778832 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.882346 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.882415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.882433 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.882471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.882494 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.986408 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.986485 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.986505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.986533 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:27 crc kubenswrapper[4953]: I0223 00:07:27.986554 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:27Z","lastTransitionTime":"2026-02-23T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.019617 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:28 crc kubenswrapper[4953]: E0223 00:07:28.019853 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:28 crc kubenswrapper[4953]: E0223 00:07:28.019945 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs podName:71837ac6-9a75-4640-af98-633ccdd09e20 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:36.01991851 +0000 UTC m=+53.953760396 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs") pod "network-metrics-daemon-wppgs" (UID: "71837ac6-9a75-4640-af98-633ccdd09e20") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.090387 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.090459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.090491 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.090522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.090544 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.193744 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.193823 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.193841 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.193873 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.193895 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.263670 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:14:48.70169083 +0000 UTC Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.296563 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.296626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.296639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.296660 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.296673 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.325592 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:28 crc kubenswrapper[4953]: E0223 00:07:28.325719 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.325607 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.325932 4953 scope.go:117] "RemoveContainer" containerID="29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b" Feb 23 00:07:28 crc kubenswrapper[4953]: E0223 00:07:28.326160 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.400892 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.401093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.401120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.401152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.401173 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.503927 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.503977 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.503990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.504013 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.504027 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.608184 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.608240 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.608257 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.608310 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.608329 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.711718 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.711793 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.711805 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.711829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.711845 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.720651 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.723446 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.723897 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.745628 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ba
a38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.766314 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.783701 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.802664 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.815206 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.815347 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.815363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.815385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.815410 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.818749 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.839613 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.854836 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.873554 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.896189 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609a
de2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.918256 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.918328 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.918342 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.918363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.918377 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:28Z","lastTransitionTime":"2026-02-23T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.922360 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 00:07:19.656901 6407 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0223 00:07:19.656916 6407 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0223 00:07:19.656924 6407 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\
\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 
00:07:28.938418 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc 
kubenswrapper[4953]: I0223 00:07:28.956272 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.969445 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.984106 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:28 crc kubenswrapper[4953]: I0223 00:07:28.998911 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.013168 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.021434 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.021510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.021531 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc 
kubenswrapper[4953]: I0223 00:07:29.021562 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.021581 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.124003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.124073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.124085 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.124100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.124110 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.227248 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.227309 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.227319 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.227339 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.227350 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.264141 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:28:54.665308058 +0000 UTC Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.325843 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.326018 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:29 crc kubenswrapper[4953]: E0223 00:07:29.326084 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:29 crc kubenswrapper[4953]: E0223 00:07:29.326234 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.330370 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.330412 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.330426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.330444 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.330454 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.433606 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.433659 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.433674 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.433693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.433705 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.536571 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.536621 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.536633 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.536653 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.536666 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.638597 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.638696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.638716 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.638750 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.638775 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.741268 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.741329 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.741337 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.741354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.741365 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.844148 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.844186 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.844196 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.844208 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.844219 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.947372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.947415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.947425 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.947442 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:29 crc kubenswrapper[4953]: I0223 00:07:29.947454 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:29Z","lastTransitionTime":"2026-02-23T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.050140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.050204 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.050221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.050245 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.050263 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.153386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.153460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.153484 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.153513 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.153534 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.257041 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.257137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.257150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.257171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.257186 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.264531 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:08:10.123846482 +0000 UTC Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.325893 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.325943 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:30 crc kubenswrapper[4953]: E0223 00:07:30.326088 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:30 crc kubenswrapper[4953]: E0223 00:07:30.326511 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.361037 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.361108 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.361121 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.361142 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.361155 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.464692 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.464775 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.464796 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.464861 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.464885 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.568440 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.568535 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.568572 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.568614 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.568644 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.672561 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.672656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.672677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.672714 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.672736 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.776365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.776430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.776443 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.776471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.776491 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.878875 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.878907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.878915 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.878934 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.878949 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.981449 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.981481 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.981490 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.981502 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:30 crc kubenswrapper[4953]: I0223 00:07:30.981512 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:30Z","lastTransitionTime":"2026-02-23T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.084271 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.084361 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.084373 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.084398 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.084411 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.113900 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.113937 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.113946 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.113964 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.113975 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: E0223 00:07:31.134098 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.138930 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.138962 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.138974 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.138993 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.139008 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: E0223 00:07:31.154063 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.157646 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.157711 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.157723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.157747 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.157761 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: E0223 00:07:31.175613 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.181343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.181488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.181614 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.181740 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.181886 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: E0223 00:07:31.202157 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.206969 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.207012 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.207027 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.207047 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.207061 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: E0223 00:07:31.227579 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:31 crc kubenswrapper[4953]: E0223 00:07:31.228040 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.230942 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.231133 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.231538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.231817 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.232024 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.265116 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:41:06.262971164 +0000 UTC Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.325817 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.325935 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:31 crc kubenswrapper[4953]: E0223 00:07:31.326115 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:31 crc kubenswrapper[4953]: E0223 00:07:31.326353 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.335606 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.335693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.335723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.335758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.335792 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.438277 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.438400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.438429 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.438459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.438479 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.542510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.542869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.542995 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.543135 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.543262 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.645985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.646339 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.646647 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.646950 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.647283 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.752565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.752609 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.752620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.752635 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.752644 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.856014 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.856084 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.856102 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.856132 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.856150 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.960763 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.960837 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.960855 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.960882 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:31 crc kubenswrapper[4953]: I0223 00:07:31.960901 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:31Z","lastTransitionTime":"2026-02-23T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.065035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.065100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.065119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.065150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.065171 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.168111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.168170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.168187 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.168212 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.168229 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.266515 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:12:50.690104034 +0000 UTC Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.271757 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.271824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.271844 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.271869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.271889 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.326541 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.326641 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:07:32 crc kubenswrapper[4953]: E0223 00:07:32.326736 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:07:32 crc kubenswrapper[4953]: E0223 00:07:32.327085 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.375654 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.375709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.375725 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.375749 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.375765 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.479906 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.479969 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.479988 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.480018 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.480039 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.583550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.583602 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.583620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.583647 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.583669 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.688112 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.688560 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.688648 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.688747 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.688889 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.791837 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.791883 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.791892 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.791914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.791923 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.895003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.895072 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.895090 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.895117 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.895137 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.997841 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.997925 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.997950 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.997989 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:32 crc kubenswrapper[4953]: I0223 00:07:32.998014 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:32Z","lastTransitionTime":"2026-02-23T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.101932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.102005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.102023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.102050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.102070 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.206093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.206149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.206167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.206189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.206204 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.267709 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:01:01.509986511 +0000 UTC
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.310162 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.310235 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.310254 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.310306 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.310327 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.325667 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.325846 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:07:33 crc kubenswrapper[4953]: E0223 00:07:33.326052 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:07:33 crc kubenswrapper[4953]: E0223 00:07:33.326255 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.350717 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.374559 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.393278 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.413203 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.413762 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.413837 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.413855 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.413883 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.413913 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.436871 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.461988 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.499930 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5d6c756e1402dbb308effcbb6a1739c2b1f16279f14af943131eeee813bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:18Z\\\",\\\"message\\\":\\\".com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalve
rsions/factory.go:140\\\\nI0223 00:07:17.866603 6218 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.866871 6218 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 00:07:17.867043 6218 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.867319 6218 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867653 6218 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 00:07:17.867705 6218 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 00:07:17.868277 6218 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:07:17.868437 6218 factory.go:656] Stopping watch factory\\\\nI0223 00:07:17.868499 6218 ovnkube.go:599] Stopped ovnkube\\\\nI0223 00:07:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 00:07:19.656901 6407 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0223 00:07:19.656916 6407 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0223 00:07:19.656924 6407 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427
fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.517564 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.517617 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.517626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.517644 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.517655 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.520618 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc 
kubenswrapper[4953]: I0223 00:07:33.542221 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.564118 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.587114 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.604989 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.620736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.620801 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.620820 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.620849 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.620868 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.627030 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z 
is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.649024 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.666934 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.687904 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.724166 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.724228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.724246 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:33 crc 
kubenswrapper[4953]: I0223 00:07:33.724275 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.724323 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.828209 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.828323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.828344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.828380 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.828407 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.932274 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.932361 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.932382 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.932409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:33 crc kubenswrapper[4953]: I0223 00:07:33.932428 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:33Z","lastTransitionTime":"2026-02-23T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.035147 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.035245 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.035275 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.035359 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.035386 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.137219 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.137246 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.137304 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.137322 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.137332 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.239987 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.240038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.240050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.240066 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.240077 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.268861 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:50:16.408113115 +0000 UTC Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.325463 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.325507 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:34 crc kubenswrapper[4953]: E0223 00:07:34.325579 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:34 crc kubenswrapper[4953]: E0223 00:07:34.325762 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.343537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.343598 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.343611 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.343628 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.343638 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.446675 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.446736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.446753 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.446778 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.446796 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.549754 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.549849 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.549870 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.549902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.549926 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.653399 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.653474 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.653565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.653597 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.653618 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.756713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.756778 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.756801 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.756832 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.756851 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.860596 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.860672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.860691 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.860723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.860741 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.964431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.964505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.964524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.964554 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:34 crc kubenswrapper[4953]: I0223 00:07:34.964573 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:34Z","lastTransitionTime":"2026-02-23T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.069330 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.069412 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.069431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.069460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.069480 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.172235 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.172340 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.172363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.172392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.172414 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.269469 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:55:49.123594597 +0000 UTC Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.276313 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.276389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.276402 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.276423 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.276438 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.326336 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.326645 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:35 crc kubenswrapper[4953]: E0223 00:07:35.326858 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.326973 4953 scope.go:117] "RemoveContainer" containerID="770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a" Feb 23 00:07:35 crc kubenswrapper[4953]: E0223 00:07:35.326991 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.352792 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.380113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.380194 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.380213 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.380244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.380265 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.380480 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.414024 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 00:07:19.656901 6407 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0223 00:07:19.656916 6407 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0223 00:07:19.656924 6407 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.430561 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.454936 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.473855 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.483488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.483546 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.483558 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.483579 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.483594 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.495885 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.512418 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.530645 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.547891 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.567060 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.584932 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.587800 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.587885 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.587900 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.587952 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.587969 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.609189 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.627648 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.642146 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.656510 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.691672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.691724 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.691733 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.691748 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.691758 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.760441 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/1.log"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.764348 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a"}
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.764619 4953 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.786765 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.795016 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:35 crc 
kubenswrapper[4953]: I0223 00:07:35.795099 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.795119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.795153 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.795177 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.816558 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.839036 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 00:07:19.656901 6407 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0223 00:07:19.656916 6407 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0223 00:07:19.656924 6407 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.857364 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc 
kubenswrapper[4953]: I0223 00:07:35.882462 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.898180 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.898229 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.898242 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.898261 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.898273 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:35Z","lastTransitionTime":"2026-02-23T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.917738 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0
cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.932737 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.945352 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.958394 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.970546 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:35 crc kubenswrapper[4953]: I0223 00:07:35.988405 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.001118 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.001171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.001185 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.001212 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.001227 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.003275 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.023223 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.034710 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.037331 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:36 crc kubenswrapper[4953]: E0223 00:07:36.037456 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:36 crc kubenswrapper[4953]: E0223 00:07:36.037518 4953 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs podName:71837ac6-9a75-4640-af98-633ccdd09e20 nodeName:}" failed. No retries permitted until 2026-02-23 00:07:52.037499756 +0000 UTC m=+69.971341622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs") pod "network-metrics-daemon-wppgs" (UID: "71837ac6-9a75-4640-af98-633ccdd09e20") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.047141 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.058127 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.067114 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.104145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.104207 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.104224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 
00:07:36.104252 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.104272 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.206666 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.206737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.206751 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.206776 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.206792 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.270585 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:33:46.973329672 +0000 UTC Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.310299 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.310365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.310377 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.310402 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.310416 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.325565 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.325715 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:36 crc kubenswrapper[4953]: E0223 00:07:36.325924 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:36 crc kubenswrapper[4953]: E0223 00:07:36.325730 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.413776 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.413842 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.413856 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.413879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.413907 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.516998 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.517043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.517053 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.517067 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.517076 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.620126 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.620181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.620198 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.620222 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.620240 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.723793 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.723868 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.723890 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.723922 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.723943 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.770732 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/2.log" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.771553 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/1.log" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.775060 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a" exitCode=1 Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.775114 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.775172 4953 scope.go:117] "RemoveContainer" containerID="770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.775667 4953 scope.go:117] "RemoveContainer" containerID="be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a" Feb 23 00:07:36 crc kubenswrapper[4953]: E0223 00:07:36.775808 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.789170 4953 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.802374 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.815926 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.826980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.827026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.827038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 
00:07:36.827055 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.827067 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.837116 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.852190 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.865597 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.875617 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.893487 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.909426 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609a
de2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.929541 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.929459 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://770a459232afde0351797090e656b7ee8b69e031ac6e1c96dc318f048e050f4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"message\\\":\\\"_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 00:07:19.656901 6407 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0223 00:07:19.656916 6407 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0223 00:07:19.656924 6407 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network 
policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\
":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 
2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.930168 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.930195 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.930216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.930233 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:36Z","lastTransitionTime":"2026-02-23T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.945104 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.957666 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.969853 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:36 crc kubenswrapper[4953]: I0223 00:07:36.984583 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.003638 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.015675 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.032990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.033063 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.033075 4953 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.033094 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.033106 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.136034 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.136072 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.136081 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.136095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.136105 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.149250 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.149480 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149501 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:08:09.149462274 +0000 UTC m=+87.083304140 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.149573 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.149628 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.149679 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149731 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:37 crc 
kubenswrapper[4953]: E0223 00:07:37.149820 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:08:09.149796942 +0000 UTC m=+87.083638788 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149852 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149890 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149894 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149912 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149932 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149931 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149962 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.149975 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:08:09.149944046 +0000 UTC m=+87.083785922 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.150006 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 00:08:09.149992307 +0000 UTC m=+87.083834193 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.150067 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 00:08:09.150031568 +0000 UTC m=+87.083873614 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.238527 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.238565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.238576 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.238593 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.238604 4953 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.271160 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:55:59.445108639 +0000 UTC Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.325600 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.325709 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.325759 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.325906 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.340380 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.340423 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.340431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.340445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.340455 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.443681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.443728 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.443737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.443752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.443762 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.545536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.545565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.545573 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.545585 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.545594 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.648213 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.648254 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.648262 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.648277 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.648317 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.750938 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.750999 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.751018 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.751042 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.751056 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.781218 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/2.log" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.785056 4953 scope.go:117] "RemoveContainer" containerID="be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a" Feb 23 00:07:37 crc kubenswrapper[4953]: E0223 00:07:37.785197 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.797508 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.816810 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.829189 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.832622 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.843511 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.848599 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.853446 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.853499 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.853511 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.853531 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.853546 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.863137 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc 
kubenswrapper[4953]: I0223 00:07:37.877799 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.890825 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.904160 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.913400 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.925022 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.940944 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609a
de2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.952171 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.955639 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.955675 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.955684 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:37 crc 
kubenswrapper[4953]: I0223 00:07:37.955697 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.955706 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:37Z","lastTransitionTime":"2026-02-23T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.964681 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.978929 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:37 crc kubenswrapper[4953]: I0223 00:07:37.990373 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:37Z is after 
2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.004526 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.016886 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\
\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.028885 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.040953 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.050538 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.057949 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.057983 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.057992 4953 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.058007 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.058017 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.060006 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.072378 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.083780 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.106217 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.120506 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.133138 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.142004 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.153899 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.160335 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.160366 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.160378 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.160396 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.160408 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.177546 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.197174 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.210443 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.225000 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.237408 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:38Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.262669 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.262694 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.262702 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc 
kubenswrapper[4953]: I0223 00:07:38.262715 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.262726 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.272171 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:38:36.103963164 +0000 UTC Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.325349 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.325349 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:38 crc kubenswrapper[4953]: E0223 00:07:38.325477 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:38 crc kubenswrapper[4953]: E0223 00:07:38.325529 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.365534 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.365569 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.365578 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.365592 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.365601 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.467990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.468035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.468046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.468066 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.468077 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.570971 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.571065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.571085 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.571114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.571134 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.674250 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.674330 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.674346 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.674365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.674376 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.782113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.782154 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.782165 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.782181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.782191 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.885454 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.885500 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.885512 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.885537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.885552 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.988807 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.988871 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.988886 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.988907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:38 crc kubenswrapper[4953]: I0223 00:07:38.988923 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:38Z","lastTransitionTime":"2026-02-23T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.092053 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.092122 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.092144 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.092168 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.092182 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.195728 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.195783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.195794 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.195813 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.195824 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.272360 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:01:00.210123538 +0000 UTC Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.298078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.298133 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.298145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.298163 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.298176 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.325712 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.325727 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:39 crc kubenswrapper[4953]: E0223 00:07:39.325847 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:39 crc kubenswrapper[4953]: E0223 00:07:39.326017 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.404896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.405008 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.405028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.405054 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.405077 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.508460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.508520 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.508536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.508558 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.508574 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.611091 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.611161 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.611178 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.611202 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.611219 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.714642 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.714693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.714704 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.714721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.714734 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.816384 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.816425 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.816434 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.816448 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.816458 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.919355 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.919433 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.919452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.919484 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:39 crc kubenswrapper[4953]: I0223 00:07:39.919512 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:39Z","lastTransitionTime":"2026-02-23T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.021984 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.022052 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.022065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.022084 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.022100 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.124703 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.124806 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.124824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.124852 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.124870 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.228254 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.228363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.228381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.228410 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.228430 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.272902 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:58:56.702383606 +0000 UTC Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.326047 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.326141 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:40 crc kubenswrapper[4953]: E0223 00:07:40.326201 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:40 crc kubenswrapper[4953]: E0223 00:07:40.326390 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.332227 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.332263 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.332276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.332456 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.332478 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.436087 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.436158 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.436181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.436216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.436237 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.539005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.539062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.539078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.539101 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.539118 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.642383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.642457 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.642478 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.642504 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.642522 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.745381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.745426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.745439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.745456 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.745468 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.847735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.847799 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.847816 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.847840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.847853 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.950411 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.950471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.950488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.950510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:40 crc kubenswrapper[4953]: I0223 00:07:40.950523 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:40Z","lastTransitionTime":"2026-02-23T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.053565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.053615 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.053630 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.053649 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.053665 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.156474 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.156545 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.156562 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.156586 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.156603 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.259160 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.259250 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.259269 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.259332 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.259354 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.273770 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:32:24.62508122 +0000 UTC Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.291808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.291849 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.291860 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.291879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.291897 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: E0223 00:07:41.311356 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.316600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.316636 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.316645 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.316664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.316682 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.325793 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:41 crc kubenswrapper[4953]: E0223 00:07:41.325906 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.325790 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:41 crc kubenswrapper[4953]: E0223 00:07:41.326107 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:41 crc kubenswrapper[4953]: E0223 00:07:41.338471 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.342877 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.342911 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.342923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.342939 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.342950 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: E0223 00:07:41.360128 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.365713 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.365785 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.365809 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.365840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.365867 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: E0223 00:07:41.383628 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.388657 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.388721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.388740 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.388767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.388783 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: E0223 00:07:41.411030 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: E0223 00:07:41.411340 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.413591 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.413654 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.413830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.413883 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.413906 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.517118 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.517192 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.517210 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.517237 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.517257 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.623233 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.623394 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.623455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.623487 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.623514 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.723226 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.726269 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.726366 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.726393 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.726420 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.726442 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.734662 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.751593 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.770042 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.789400 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.814720 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.829388 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.829434 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.829453 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.829511 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.829531 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.839585 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.871448 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.892206 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.915563 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca83
2f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.931962 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.932022 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.932038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.932062 4953 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.932082 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:41Z","lastTransitionTime":"2026-02-23T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.937208 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b4
91cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.957349 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.979031 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:41 crc kubenswrapper[4953]: I0223 00:07:41.999834 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.040586 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.040645 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.040659 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc 
kubenswrapper[4953]: I0223 00:07:42.040680 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.040695 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.041630 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:42Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.066099 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:42Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.088464 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:42Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.103235 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:42Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.143736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.143782 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.143794 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.143814 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.143828 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.246343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.246377 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.246385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.246397 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.246406 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.274872 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:43:02.327059914 +0000 UTC Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.325716 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:42 crc kubenswrapper[4953]: E0223 00:07:42.325877 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.325720 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:42 crc kubenswrapper[4953]: E0223 00:07:42.326025 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.348759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.348835 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.348851 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.348869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.348881 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.451081 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.451133 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.451149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.451170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.451186 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.553635 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.553691 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.553714 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.553736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.553752 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.656845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.656929 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.656951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.656982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.657005 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.760605 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.760663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.760682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.760705 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.760719 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.863277 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.863339 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.863352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.863370 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.863382 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.966037 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.966110 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.966134 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.966164 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:42 crc kubenswrapper[4953]: I0223 00:07:42.966186 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:42Z","lastTransitionTime":"2026-02-23T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.069269 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.069362 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.069379 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.069400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.069415 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.173013 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.173052 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.173062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.173079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.173091 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.275630 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:46:28.605310532 +0000 UTC Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.276485 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.276553 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.276571 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.276593 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.276607 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.325418 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.325499 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:43 crc kubenswrapper[4953]: E0223 00:07:43.325568 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:43 crc kubenswrapper[4953]: E0223 00:07:43.325700 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.341738 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.354672 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.370386 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.379352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.379402 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.379422 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 
00:07:43.379447 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.379464 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.390093 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f
87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.418856 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.433636 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.457540 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca83
2f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.477674 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.482145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.482204 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.482221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.482244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.482262 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.496076 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.508943 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.523242 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.543628 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.561929 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.578374 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.584451 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.584502 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.584516 4953 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.584539 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.584553 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.595506 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.618423 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.632522 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.687951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.688427 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.688439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.688456 4953 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.688466 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.790878 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.790910 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.790920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.790936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.790946 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.893891 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.893932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.893940 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.893956 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.893967 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.996705 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.996769 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.996788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.996813 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:43 crc kubenswrapper[4953]: I0223 00:07:43.996832 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:43Z","lastTransitionTime":"2026-02-23T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.100855 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.100924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.100944 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.100972 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.100992 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.203526 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.203567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.203579 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.203593 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.203605 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.276097 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:58:26.403565514 +0000 UTC Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.306263 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.306383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.306409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.306439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.306463 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.326371 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.326449 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:44 crc kubenswrapper[4953]: E0223 00:07:44.326898 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:44 crc kubenswrapper[4953]: E0223 00:07:44.327043 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.409387 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.409419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.409444 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.409457 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.409467 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.511610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.511648 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.511682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.511697 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.511706 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.614950 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.615003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.615013 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.615026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.615035 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.717823 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.718052 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.718189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.718277 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.718374 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.821075 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.821121 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.821132 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.821149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.821161 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.924323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.924383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.924401 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.924429 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:44 crc kubenswrapper[4953]: I0223 00:07:44.924446 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:44Z","lastTransitionTime":"2026-02-23T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.028424 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.028501 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.028531 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.028564 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.028585 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.131538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.131636 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.131662 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.131701 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.131727 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.235710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.235881 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.235915 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.235949 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.235972 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.278477 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:10:10.034179685 +0000 UTC Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.325755 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:45 crc kubenswrapper[4953]: E0223 00:07:45.326032 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.325799 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:45 crc kubenswrapper[4953]: E0223 00:07:45.326483 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.341795 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.341862 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.341879 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.341904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.341924 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.444712 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.444757 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.444771 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.444790 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.444805 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.548546 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.548640 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.548653 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.548669 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.548681 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.651365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.651635 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.651771 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.651920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.652052 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.757114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.757159 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.757169 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.757186 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.757201 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.860339 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.860382 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.860397 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.860418 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.860429 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.963552 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.963637 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.963665 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.963706 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:45 crc kubenswrapper[4953]: I0223 00:07:45.963734 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:45Z","lastTransitionTime":"2026-02-23T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.067151 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.067208 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.067226 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.067255 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.067273 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.171100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.171183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.171206 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.171244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.171264 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.274953 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.275034 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.275062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.275100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.275125 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.279345 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:06:20.777582442 +0000 UTC
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.325903 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.325936 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:07:46 crc kubenswrapper[4953]: E0223 00:07:46.326059 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:07:46 crc kubenswrapper[4953]: E0223 00:07:46.326195 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.377612 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.377672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.377689 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.377715 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.377733 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.481093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.481209 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.481246 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.481323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.481351 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.584776 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.584834 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.584850 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.584874 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.584892 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.687829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.687884 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.687899 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.687921 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.687934 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.791015 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.791107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.791125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.791151 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.791167 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.894093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.894144 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.894156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.894175 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.894189 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.997114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.997155 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.997164 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.997177 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:46 crc kubenswrapper[4953]: I0223 00:07:46.997188 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:46Z","lastTransitionTime":"2026-02-23T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.100824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.100897 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.100924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.100957 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.100979 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.204866 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.204940 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.204961 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.204991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.205009 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.279670 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:28:41.484545652 +0000 UTC
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.309595 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.309661 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.309682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.309727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.309755 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.326259 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:07:47 crc kubenswrapper[4953]: E0223 00:07:47.326540 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.326639 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:07:47 crc kubenswrapper[4953]: E0223 00:07:47.326915 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.412884 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.412988 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.413010 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.413035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.413052 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.516354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.516412 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.516429 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.516452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.516472 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.619191 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.619236 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.619254 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.619274 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.619312 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.723958 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.724015 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.724033 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.724062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.724082 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.826222 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.826264 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.826275 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.826313 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.826323 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.930017 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.930086 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.930105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.930132 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:47 crc kubenswrapper[4953]: I0223 00:07:47.930153 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:47Z","lastTransitionTime":"2026-02-23T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.034506 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.034579 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.034590 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.034611 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.034626 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.137396 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.137459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.137475 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.137500 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.137516 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.240667 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.240727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.240737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.240758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.240771 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.280348 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:06:47.930407354 +0000 UTC
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.325856 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.325856 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:07:48 crc kubenswrapper[4953]: E0223 00:07:48.326098 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:07:48 crc kubenswrapper[4953]: E0223 00:07:48.326159 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.343865 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.343927 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.343944 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.343973 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.343990 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.448566 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.448650 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.448681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.448719 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.448747 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.552355 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.552419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.552436 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.552461 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.552483 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.654850 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.654940 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.654963 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.654998 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.655020 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.758182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.758258 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.758280 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.758334 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.758354 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.862270 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.862330 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.862340 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.862359 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.862397 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.964661 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.964746 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.964765 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.964801 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:48 crc kubenswrapper[4953]: I0223 00:07:48.964823 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:48Z","lastTransitionTime":"2026-02-23T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.068039 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.068098 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.068111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.068128 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.068140 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.172050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.172123 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.172144 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.172178 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.172199 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.275068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.275133 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.275150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.275175 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.275194 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.281147 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:59:20.103916109 +0000 UTC Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.325987 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.326077 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:49 crc kubenswrapper[4953]: E0223 00:07:49.326143 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:49 crc kubenswrapper[4953]: E0223 00:07:49.326303 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.377136 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.377208 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.377228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.377256 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.377274 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.480311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.480355 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.480365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.480384 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.480395 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.582718 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.582759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.582768 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.582783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.582792 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.684904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.684956 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.684967 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.685028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.685041 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.787160 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.787199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.787207 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.787224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.787234 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.890441 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.890493 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.890509 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.890535 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.890554 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.993354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.993404 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.993419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.993437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:49 crc kubenswrapper[4953]: I0223 00:07:49.993449 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:49Z","lastTransitionTime":"2026-02-23T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.095625 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.095678 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.095688 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.095708 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.095722 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.198000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.198043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.198060 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.198077 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.198088 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.281547 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:34:25.105588435 +0000 UTC Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.301133 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.301178 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.301192 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.301211 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.301227 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.325509 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.325635 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:50 crc kubenswrapper[4953]: E0223 00:07:50.325686 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:50 crc kubenswrapper[4953]: E0223 00:07:50.325857 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.326797 4953 scope.go:117] "RemoveContainer" containerID="be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a" Feb 23 00:07:50 crc kubenswrapper[4953]: E0223 00:07:50.327006 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.403200 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.403229 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.403236 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.403249 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.403258 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.505315 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.505356 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.505368 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.505381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.505390 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.608084 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.608157 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.608170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.608190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.608203 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.710722 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.710800 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.710820 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.710851 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.710871 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.814367 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.814421 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.814435 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.814458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.814472 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.916606 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.916653 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.916663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.916712 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:50 crc kubenswrapper[4953]: I0223 00:07:50.916723 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:50Z","lastTransitionTime":"2026-02-23T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.020367 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.020416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.020427 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.020446 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.020457 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.122788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.122822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.122832 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.122845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.122854 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.225031 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.225097 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.225114 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.225138 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.225155 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.282383 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:11:14.163964213 +0000 UTC Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.326564 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.327011 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:51 crc kubenswrapper[4953]: E0223 00:07:51.327186 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:51 crc kubenswrapper[4953]: E0223 00:07:51.327279 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.328445 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.328505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.328522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.328571 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.328588 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.338223 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.431564 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.431600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.431608 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.431624 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.431636 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.534578 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.534654 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.534677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.534710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.534742 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.637151 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.637204 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.637216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.637235 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.637245 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.654064 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.654126 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.654157 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.654171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.654182 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: E0223 00:07:51.671613 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.675970 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.676014 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.676028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.676046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.676060 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: E0223 00:07:51.692806 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.697640 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.697680 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.697689 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.697702 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.697713 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: E0223 00:07:51.714485 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.718663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.718695 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.718709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.718725 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.718737 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: E0223 00:07:51.731910 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.736082 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.736119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.736131 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.736146 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.736156 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: E0223 00:07:51.749985 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:51 crc kubenswrapper[4953]: E0223 00:07:51.750090 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.751830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.751869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.751883 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.751899 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.751910 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.854243 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.854276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.854302 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.854316 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.854325 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.956601 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.956648 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.956659 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.956678 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:51 crc kubenswrapper[4953]: I0223 00:07:51.956689 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:51Z","lastTransitionTime":"2026-02-23T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.059341 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.059392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.059410 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.059436 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.059453 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.109376 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:52 crc kubenswrapper[4953]: E0223 00:07:52.109532 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:52 crc kubenswrapper[4953]: E0223 00:07:52.109602 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs podName:71837ac6-9a75-4640-af98-633ccdd09e20 nodeName:}" failed. No retries permitted until 2026-02-23 00:08:24.109584655 +0000 UTC m=+102.043426501 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs") pod "network-metrics-daemon-wppgs" (UID: "71837ac6-9a75-4640-af98-633ccdd09e20") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.161546 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.161591 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.161606 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.161622 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.161631 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.264302 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.264363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.264379 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.264400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.264415 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.283722 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:29:15.293495328 +0000 UTC Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.325812 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.325826 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:52 crc kubenswrapper[4953]: E0223 00:07:52.326042 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:52 crc kubenswrapper[4953]: E0223 00:07:52.326112 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.366696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.366737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.366745 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.366758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.366767 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.468784 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.468821 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.468830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.468846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.468857 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.571294 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.571341 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.571350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.571362 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.571371 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.673294 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.673359 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.673371 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.673388 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.673401 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.775808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.775930 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.775952 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.775979 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.776001 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.879129 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.879225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.879244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.879276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.879381 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.982452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.982529 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.982550 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.982587 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:52 crc kubenswrapper[4953]: I0223 00:07:52.982613 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:52Z","lastTransitionTime":"2026-02-23T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.085637 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.085702 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.085718 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.085739 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.085759 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.187927 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.187968 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.187979 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.187999 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.188014 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.284547 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:53:25.384256336 +0000 UTC Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.290122 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.290162 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.290171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.290186 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.290195 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.325265 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.325345 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:53 crc kubenswrapper[4953]: E0223 00:07:53.325379 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:53 crc kubenswrapper[4953]: E0223 00:07:53.325488 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.344990 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.366476 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.388106 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.393570 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.393604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.393618 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.393631 4953 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.393641 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.401224 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1
2962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.414389 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.435174 4953 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.455496 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.469127 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.484621 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.498105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.498136 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.498145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.498159 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.498168 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.498768 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.511268 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.524116 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.545641 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609a
de2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.567454 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 
loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.581032 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.600066 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.600107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.600120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.600140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.600154 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.600362 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.616050 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.635747 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.702607 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.702683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.702696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.702715 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.702726 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.804767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.804846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.804869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.804897 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.804921 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.907741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.907783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.907822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.907840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:53 crc kubenswrapper[4953]: I0223 00:07:53.907850 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:53Z","lastTransitionTime":"2026-02-23T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.011182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.011264 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.011317 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.011350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.011371 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.114421 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.114503 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.114524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.114554 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.114576 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.217855 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.217901 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.217914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.217932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.217945 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.284787 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:48:03.197175075 +0000 UTC Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.320170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.320230 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.320243 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.320264 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.320325 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.325452 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.325539 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:54 crc kubenswrapper[4953]: E0223 00:07:54.325582 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:54 crc kubenswrapper[4953]: E0223 00:07:54.325734 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.424076 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.424124 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.424135 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.424153 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.424165 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.526993 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.527036 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.527051 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.527068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.527078 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.630383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.630435 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.630448 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.630468 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.630482 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.732812 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.732858 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.732869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.732886 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.732898 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.835177 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.835211 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.835219 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.835232 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.835241 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.938075 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.938145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.938156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.938173 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:54 crc kubenswrapper[4953]: I0223 00:07:54.938187 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:54Z","lastTransitionTime":"2026-02-23T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.041711 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.041770 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.041791 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.041814 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.041830 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.144008 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.144077 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.144089 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.144109 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.144120 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.246216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.246344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.246373 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.246404 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.246439 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.286083 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:44:20.829913168 +0000 UTC Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.325686 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.325816 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:55 crc kubenswrapper[4953]: E0223 00:07:55.325937 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:55 crc kubenswrapper[4953]: E0223 00:07:55.326060 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.348248 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.348338 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.348357 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.348378 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.348390 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.451357 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.451476 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.451495 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.451523 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.451545 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.554575 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.554654 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.554676 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.554709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.554730 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.657691 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.657740 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.657750 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.657769 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.657782 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.760991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.761038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.761047 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.761069 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.761080 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.848033 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/0.log" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.848102 4953 generic.go:334] "Generic (PLEG): container finished" podID="c6ae22b1-a5f9-483a-be3d-32cfb7d516d5" containerID="c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371" exitCode=1 Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.848147 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxzfb" event={"ID":"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5","Type":"ContainerDied","Data":"c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.848628 4953 scope.go:117] "RemoveContainer" containerID="c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.862842 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.865040 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.865089 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.865105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc 
kubenswrapper[4953]: I0223 00:07:55.865134 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.865152 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.883968 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.902867 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.915604 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.933261 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.946689 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.956734 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.968401 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.969203 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.969236 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.969248 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.969268 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.969281 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:55Z","lastTransitionTime":"2026-02-23T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.984082 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:55 crc kubenswrapper[4953]: I0223 00:07:55.993705 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:55Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.006747 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.020148 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.033921 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.048190 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:54Z\\\",\\\"message\\\":\\\"2026-02-23T00:07:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e\\\\n2026-02-23T00:07:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e to /host/opt/cni/bin/\\\\n2026-02-23T00:07:09Z [verbose] multus-daemon started\\\\n2026-02-23T00:07:09Z [verbose] Readiness Indicator file check\\\\n2026-02-23T00:07:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.065490 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.072047 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.072078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.072090 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.072111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.072123 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.085350 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.100226 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.116984 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca83
2f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.175228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.175341 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.175368 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.175397 4953 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.175421 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.277709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.277741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.277754 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.277771 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.277782 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.286861 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:31:56.212267148 +0000 UTC Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.326273 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:56 crc kubenswrapper[4953]: E0223 00:07:56.326443 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.326744 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:56 crc kubenswrapper[4953]: E0223 00:07:56.326840 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.380164 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.380210 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.380223 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.380240 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.380252 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.482765 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.482817 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.482832 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.482850 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.482862 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.584505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.584548 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.584563 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.584579 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.584593 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.687313 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.687350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.687360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.687375 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.687384 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.812168 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.812201 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.812211 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.812227 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.812236 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.852416 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/0.log" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.852473 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxzfb" event={"ID":"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5","Type":"ContainerStarted","Data":"47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.862522 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.871524 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.880638 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.895744 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 
loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.905176 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.914559 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.914600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.914610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.914627 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.914636 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:56Z","lastTransitionTime":"2026-02-23T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.917399 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.927867 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.938420 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.947590 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.958377 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:54Z\\\",\\\"message\\\":\\\"2026-02-23T00:07:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e\\\\n2026-02-23T00:07:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e to /host/opt/cni/bin/\\\\n2026-02-23T00:07:09Z [verbose] multus-daemon started\\\\n2026-02-23T00:07:09Z [verbose] Readiness Indicator file check\\\\n2026-02-23T00:07:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.970472 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.981575 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:56 crc kubenswrapper[4953]: I0223 00:07:56.992127 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.001549 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.012179 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.016574 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.016612 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.016621 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.016637 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.016646 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.022749 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.033716 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.043116 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.118981 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.119014 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.119023 4953 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.119038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.119048 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.224145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.224189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.224201 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.224220 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.224230 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.287945 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:36:37.659798343 +0000 UTC Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.325421 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.325475 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:07:57 crc kubenswrapper[4953]: E0223 00:07:57.325552 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:07:57 crc kubenswrapper[4953]: E0223 00:07:57.325621 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.327176 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.327242 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.327256 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.327270 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.327282 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.430647 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.430681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.430692 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.430707 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.430717 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.533003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.533037 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.533046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.533058 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.533067 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.634907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.634945 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.634954 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.634969 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.634978 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.737326 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.737388 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.737400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.737415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.737427 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.839735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.839768 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.839776 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.839788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.839796 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.941835 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.941885 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.941897 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.941912 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:57 crc kubenswrapper[4953]: I0223 00:07:57.941923 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:57Z","lastTransitionTime":"2026-02-23T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.043672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.043712 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.043721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.043737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.043747 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.146107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.146141 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.146150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.146163 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.146172 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.248360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.248439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.248453 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.248472 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.248483 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.288142 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:16:59.074450257 +0000 UTC Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.325893 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.325926 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:07:58 crc kubenswrapper[4953]: E0223 00:07:58.326109 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:07:58 crc kubenswrapper[4953]: E0223 00:07:58.326190 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.351226 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.351262 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.351273 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.351306 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.351316 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.453407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.453459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.453472 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.453510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.453525 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.556431 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.556494 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.556518 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.556549 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.556570 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.659527 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.659585 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.659601 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.659625 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.659644 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.762187 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.762239 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.762251 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.762267 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.762278 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.864715 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.864765 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.864779 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.864797 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.864809 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.967003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.967052 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.967065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.967082 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:58 crc kubenswrapper[4953]: I0223 00:07:58.967095 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:58Z","lastTransitionTime":"2026-02-23T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.069641 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.069682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.069690 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.069709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.069719 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.172973 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.173010 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.173021 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.173062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.173075 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.275735 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.275766 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.275775 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.275788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.275797 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.288349 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:00:43.850855306 +0000 UTC
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.326093 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.326142 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:07:59 crc kubenswrapper[4953]: E0223 00:07:59.326317 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:07:59 crc kubenswrapper[4953]: E0223 00:07:59.326421 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.378439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.378516 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.378529 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.378552 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.378569 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.481881 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.481920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.481931 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.481945 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.481964 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.584381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.584424 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.584433 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.584449 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.584459 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.686511 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.686610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.686626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.686674 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.686689 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.789220 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.789266 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.789308 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.789333 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.789345 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.891323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.891363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.891373 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.891389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.891399 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.993710 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.993766 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.993783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.993807 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:07:59 crc kubenswrapper[4953]: I0223 00:07:59.993825 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:07:59Z","lastTransitionTime":"2026-02-23T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.096487 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.096521 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.096529 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.096544 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.096555 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:00Z","lastTransitionTime":"2026-02-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.199540 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.199576 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.199592 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.199605 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.199614 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:00Z","lastTransitionTime":"2026-02-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.289361 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:05:18.276388534 +0000 UTC
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.302838 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.302876 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.302887 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.302904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.302914 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:00Z","lastTransitionTime":"2026-02-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.325404 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:08:00 crc kubenswrapper[4953]: E0223 00:08:00.325572 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.325823 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:08:00 crc kubenswrapper[4953]: E0223 00:08:00.325920 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.405584 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.405648 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.405663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.405686 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.405699 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:00Z","lastTransitionTime":"2026-02-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.847056 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.847105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.847115 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.847133 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.847144 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:00Z","lastTransitionTime":"2026-02-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.949495 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.949824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.949936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.950020 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:00 crc kubenswrapper[4953]: I0223 00:08:00.950101 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:00Z","lastTransitionTime":"2026-02-23T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.052936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.052989 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.053005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.053025 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.053039 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.156356 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.156412 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.156422 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.156438 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.156448 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.259038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.259085 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.259093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.259107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.259118 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.290473 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:00:10.402719072 +0000 UTC
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.325984 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.326339 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:08:01 crc kubenswrapper[4953]: E0223 00:08:01.326536 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:08:01 crc kubenswrapper[4953]: E0223 00:08:01.326668 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.326742 4953 scope.go:117] "RemoveContainer" containerID="be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.361174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.361203 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.361213 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.361224 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.361233 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.463116 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.463172 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.463189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.463212 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.463231 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.565666 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.565724 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.565738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.565756 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.565770 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.668033 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.668095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.668113 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.668137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.668153 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.762321 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.762368 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.762381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.762399 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.762412 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:01 crc kubenswrapper[4953]: E0223 00:08:01.776627 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.780328 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.780430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.780493 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.780604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.780664 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:01 crc kubenswrapper[4953]: E0223 00:08:01.802015 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.805721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.805873 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.805947 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.806069 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.806160 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:01 crc kubenswrapper[4953]: E0223 00:08:01.824769 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.829063 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.829193 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.829260 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.829340 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.829572 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:01 crc kubenswrapper[4953]: E0223 00:08:01.848172 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.853722 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.853768 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.853779 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.853794 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.853810 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:01 crc kubenswrapper[4953]: E0223 00:08:01.875101 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:01 crc kubenswrapper[4953]: E0223 00:08:01.875319 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.876970 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.877023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.877035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.877054 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:01 crc kubenswrapper[4953]: I0223 00:08:01.877068 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:01.979846 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:01.979876 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:01.979884 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:01.979897 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:01.979906 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:01Z","lastTransitionTime":"2026-02-23T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.082324 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.082360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.082369 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.082407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.082419 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.184642 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.184671 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.184681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.184695 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.184704 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.286902 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.286935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.286965 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.286985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.286994 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.291353 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:29:09.572606372 +0000 UTC
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.326361 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.326361 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:08:02 crc kubenswrapper[4953]: E0223 00:08:02.326567 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:08:02 crc kubenswrapper[4953]: E0223 00:08:02.326687 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.389760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.389798 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.389810 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.389829 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.389841 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.491875 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.491914 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.491924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.491942 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.491954 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.596264 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.596329 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.596342 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.596362 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.596374 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.791250 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.791310 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.791322 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.791340 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.791352 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.875048 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/2.log" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.877577 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.878466 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.888833 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981
fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.893517 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.893549 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.893559 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 
00:08:02.893572 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.893580 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.902130 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:54Z\\\",\\\"message\\\":\\\"2026-02-23T00:07:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e\\\\n2026-02-23T00:07:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e to /host/opt/cni/bin/\\\\n2026-02-23T00:07:09Z [verbose] multus-daemon started\\\\n2026-02-23T00:07:09Z [verbose] Readiness Indicator file check\\\\n2026-02-23T00:07:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.920403 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.938312 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-ope
nvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.950254 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:02 crc 
kubenswrapper[4953]: I0223 00:08:02.964370 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a
06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 
00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.978494 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.990462 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.995674 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.995707 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.995716 4953 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.995731 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:02 crc kubenswrapper[4953]: I0223 00:08:02.995741 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:02Z","lastTransitionTime":"2026-02-23T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.003502 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.013484 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.025007 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.038285 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.050433 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.060553 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.072629 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.123780 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.123821 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.123831 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.123848 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.123858 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.125110 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.136544 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.148957 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.225971 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.226003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc 
kubenswrapper[4953]: I0223 00:08:03.226010 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.226022 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.226031 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.291539 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:00:53.062405298 +0000 UTC Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.325776 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.325882 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:03 crc kubenswrapper[4953]: E0223 00:08:03.326089 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:03 crc kubenswrapper[4953]: E0223 00:08:03.326198 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.327595 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.327621 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.327629 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.327640 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.327649 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.343381 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.355614 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.367457 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.378648 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.390018 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.398917 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.409510 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.420757 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.429583 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.429629 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.429642 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 
00:08:03.429661 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.429674 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.433662 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.449481 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.459509 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.468448 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.478744 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:54Z\\\",\\\"message\\\":\\\"2026-02-23T00:07:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e\\\\n2026-02-23T00:07:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e to /host/opt/cni/bin/\\\\n2026-02-23T00:07:09Z [verbose] multus-daemon started\\\\n2026-02-23T00:07:09Z [verbose] Readiness Indicator file check\\\\n2026-02-23T00:07:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.491457 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.507796 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-ope
nvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.520254 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc 
kubenswrapper[4953]: I0223 00:08:03.533582 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.533620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.533631 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.533646 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.533657 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.535057 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.547640 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.635794 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.635830 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.635840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc 
kubenswrapper[4953]: I0223 00:08:03.635855 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.635866 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.737957 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.737996 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.738010 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.738026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.738036 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.840514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.840569 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.840583 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.840602 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.840617 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.881616 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/3.log" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.882404 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/2.log" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.885013 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" exitCode=1 Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.885048 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.885080 4953 scope.go:117] "RemoveContainer" containerID="be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.885757 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:08:03 crc kubenswrapper[4953]: E0223 00:08:03.885920 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.901221 4953 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.920315 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.942818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.942849 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.942859 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.942872 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.942881 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:03Z","lastTransitionTime":"2026-02-23T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.952189 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.968260 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.981381 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:03 crc kubenswrapper[4953]: I0223 00:08:03.992508 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.005187 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.014715 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.027279 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:54Z\\\",\\\"message\\\":\\\"2026-02-23T00:07:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e\\\\n2026-02-23T00:07:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e to /host/opt/cni/bin/\\\\n2026-02-23T00:07:09Z [verbose] multus-daemon started\\\\n2026-02-23T00:07:09Z [verbose] Readiness Indicator file check\\\\n2026-02-23T00:07:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.041804 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.045096 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.045138 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.045151 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.045166 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.045177 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.063745 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be39051825d7d01e7eabdc297011bd0a8edf772ed7f60210aa42aa1a53b6e42a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:36Z\\\",\\\"message\\\":\\\"07:37.762976516 +0000 UTC m=+2.095780407): skip\\\\nI0223 00:07:36.307081 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nF0223 00:07:36.307042 6625 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:07:36Z is after 2025-08-24T17:21:41Z]\\\\nI0223 00:07:36.307099 6625 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 126.663µs)\\\\nI0223 00:07:36.307115 6625 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-storage-version-mi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:08:03Z\\\",\\\"message\\\":\\\"r removal\\\\nI0223 00:08:03.208869 7037 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 00:08:03.208883 7037 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:08:03.208891 7037 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 00:08:03.208922 7037 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 00:08:03.208911 7037 
handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 00:08:03.208935 7037 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 00:08:03.208892 7037 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 00:08:03.208946 7037 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 00:08:03.208970 7037 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 00:08:03.208986 7037 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 00:08:03.208987 7037 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 00:08:03.209063 7037 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 00:08:03.209086 7037 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 00:08:03.209124 7037 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 00:08:03.209141 7037 factory.go:656] Stopping watch factory\\\\nI0223 00:08:03.209159 7037 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\
"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"
env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 
00:08:04.075959 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.084821 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.094203 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.103481 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.114763 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.124927 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.135243 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.148053 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.148090 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.148101 4953 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.148116 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.148127 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.250403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.250435 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.250444 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.250456 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.250465 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.292305 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:33:08.111817013 +0000 UTC Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.325936 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.326018 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:04 crc kubenswrapper[4953]: E0223 00:08:04.326085 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:04 crc kubenswrapper[4953]: E0223 00:08:04.326144 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.353030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.353068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.353082 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.353098 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.353112 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.454978 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.455017 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.455028 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.455043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.455056 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.558567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.558640 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.558656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.558680 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.558699 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.661265 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.661325 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.661336 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.661354 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.661366 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.764663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.764719 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.764741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.764759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.764773 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.868426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.868491 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.868507 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.868532 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.868553 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.889753 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/3.log" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.893882 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:08:04 crc kubenswrapper[4953]: E0223 00:08:04.894169 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.913024 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:08:03Z\\\",\\\"message\\\":\\\"r removal\\\\nI0223 00:08:03.208869 7037 handler.go:190] Sending *v1.EgressIP event 
handler 8 for removal\\\\nI0223 00:08:03.208883 7037 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:08:03.208891 7037 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 00:08:03.208922 7037 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 00:08:03.208911 7037 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 00:08:03.208935 7037 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 00:08:03.208892 7037 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 00:08:03.208946 7037 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 00:08:03.208970 7037 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 00:08:03.208986 7037 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 00:08:03.208987 7037 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 00:08:03.209063 7037 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 00:08:03.209086 7037 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 00:08:03.209124 7037 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 00:08:03.209141 7037 factory.go:656] Stopping watch factory\\\\nI0223 00:08:03.209159 7037 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:08:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.929605 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.946387 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca83
2f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.961730 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.971924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.971982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.972002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.972030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.972052 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:04Z","lastTransitionTime":"2026-02-23T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.978175 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:04 crc kubenswrapper[4953]: I0223 00:08:04.989835 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.012955 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:54Z\\\",\\\"message\\\":\\\"2026-02-23T00:07:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e\\\\n2026-02-23T00:07:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e to /host/opt/cni/bin/\\\\n2026-02-23T00:07:09Z [verbose] multus-daemon started\\\\n2026-02-23T00:07:09Z [verbose] Readiness Indicator file check\\\\n2026-02-23T00:07:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.034261 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.047966 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.060400 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.071074 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.074991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.075036 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.075051 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.075072 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.075087 4953 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.083526 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.102049 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.117361 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.127426 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.136081 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.154589 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T0
0:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.169364 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.177105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.177154 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.177167 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 
00:08:05.177183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.177193 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.280110 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.280159 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.280171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.280186 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.280197 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.293335 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:28:46.811144711 +0000 UTC Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.326237 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.326266 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:05 crc kubenswrapper[4953]: E0223 00:08:05.326460 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:05 crc kubenswrapper[4953]: E0223 00:08:05.326586 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.382596 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.382636 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.382648 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.382665 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.382677 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.486233 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.486345 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.486384 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.486414 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.486441 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.589650 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.589685 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.589693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.589706 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.589716 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.693538 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.693609 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.693626 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.693646 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.693663 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.796630 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.796670 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.796679 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.796692 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.796702 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.898636 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.898678 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.898696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.898718 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:05 crc kubenswrapper[4953]: I0223 00:08:05.898737 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:05Z","lastTransitionTime":"2026-02-23T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.000857 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.000960 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.000980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.001017 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.001038 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.104417 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.104481 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.104493 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.104518 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.104533 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.207612 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.207968 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.207979 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.207998 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.208011 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.294373 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:58:32.109854776 +0000 UTC Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.310389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.310418 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.310426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.310439 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.310449 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.325840 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.325961 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:06 crc kubenswrapper[4953]: E0223 00:08:06.326017 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:06 crc kubenswrapper[4953]: E0223 00:08:06.326186 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.412759 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.412814 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.412828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.412847 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.412861 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.517150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.517230 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.517259 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.517331 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.517363 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.620311 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.620419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.620441 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.620502 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.620521 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.723478 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.723571 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.723597 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.723634 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.723653 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.827762 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.827844 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.827869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.827907 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.827932 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.930514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.930569 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.930582 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.930602 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:06 crc kubenswrapper[4953]: I0223 00:08:06.930615 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:06Z","lastTransitionTime":"2026-02-23T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.033677 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.033744 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.033767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.033855 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.033911 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.137679 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.137743 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.137760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.137788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.137806 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.240641 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.240715 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.240734 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.240761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.240782 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.294762 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:39:08.340023575 +0000 UTC Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.325683 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.325751 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:07 crc kubenswrapper[4953]: E0223 00:08:07.325876 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:07 crc kubenswrapper[4953]: E0223 00:08:07.325957 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.343512 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.343574 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.343592 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.343620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.343639 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.447279 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.447381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.447400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.447428 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.447449 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.550371 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.550436 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.550452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.550483 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.550503 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.653761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.653872 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.653895 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.653929 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.653954 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.757512 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.757598 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.757625 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.757663 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.757694 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.861214 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.861283 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.861350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.861383 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.861409 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.964877 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.964969 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.964997 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.965029 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:07 crc kubenswrapper[4953]: I0223 00:08:07.965055 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:07Z","lastTransitionTime":"2026-02-23T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.069119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.069184 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.069198 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.069221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.069235 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.173815 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.173897 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.173918 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.173950 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.173972 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.278476 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.278555 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.278574 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.278604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.278627 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.295060 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:01:07.911564121 +0000 UTC Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.326120 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.326120 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:08 crc kubenswrapper[4953]: E0223 00:08:08.326490 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:08 crc kubenswrapper[4953]: E0223 00:08:08.326702 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.382534 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.382624 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.382643 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.382672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.382692 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.485863 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.485930 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.485951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.485977 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.485999 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.590674 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.590765 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.590788 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.590826 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.590851 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.693867 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.693930 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.693949 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.693976 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.693995 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.797851 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.797929 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.797951 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.797982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.798003 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.902247 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.902352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.902371 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.902398 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:08 crc kubenswrapper[4953]: I0223 00:08:08.902425 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:08Z","lastTransitionTime":"2026-02-23T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.006372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.006451 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.006471 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.006503 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.006527 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.110057 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.110108 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.110122 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.110140 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.110152 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.198745 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.199014 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.199079 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.199128 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.199178 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.199426 4953 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.199524 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.19949484 +0000 UTC m=+151.133336726 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.199560 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.199611 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.199643 4953 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.199723 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.199696465 +0000 UTC m=+151.133538351 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.199854 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.199833039 +0000 UTC m=+151.133674935 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.199985 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.200011 4953 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.200018 4953 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.200044 4953 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.200066 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.200048284 +0000 UTC m=+151.133890170 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.200106 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.200085475 +0000 UTC m=+151.133927371 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.214023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.214093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.214117 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.214154 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.214180 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.295946 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:33:40.695059424 +0000 UTC Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.317101 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.317157 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.317171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.317197 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.317212 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.326562 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.326719 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.326795 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:09 crc kubenswrapper[4953]: E0223 00:08:09.326995 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.420020 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.420065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.420074 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.420091 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.420101 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.524361 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.524421 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.524432 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.524459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.524472 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.627958 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.628032 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.628073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.628119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.628147 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.731660 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.731746 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.731770 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.731803 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.731828 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.834565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.834643 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.834664 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.834694 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.834714 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.936982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.937020 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.937030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.937047 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:09 crc kubenswrapper[4953]: I0223 00:08:09.937057 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:09Z","lastTransitionTime":"2026-02-23T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.039773 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.039821 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.039834 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.039854 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.039869 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.142338 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.142385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.142395 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.142407 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.142417 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.245027 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.245079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.245093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.245142 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.245160 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.297182 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:10:08.525650244 +0000 UTC Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.326022 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.326023 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:10 crc kubenswrapper[4953]: E0223 00:08:10.326663 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:10 crc kubenswrapper[4953]: E0223 00:08:10.326831 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.348176 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.348221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.348238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.348261 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.348279 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.451995 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.452071 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.452089 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.452120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.452137 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.556837 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.556894 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.556913 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.556935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.556953 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.660437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.660522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.660547 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.660578 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.660601 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.764044 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.764122 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.764142 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.764174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.764198 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.867722 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.867758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.867767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.867781 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.867789 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.977536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.977630 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.977652 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.977685 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:10 crc kubenswrapper[4953]: I0223 00:08:10.977709 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:10Z","lastTransitionTime":"2026-02-23T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.081017 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.081078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.081090 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.081111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.081125 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.185358 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.185412 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.185425 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.185450 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.185463 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.288225 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.288374 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.288397 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.288428 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.288451 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.297655 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:09:37.458608873 +0000 UTC Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.326127 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.326262 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:11 crc kubenswrapper[4953]: E0223 00:08:11.326420 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:11 crc kubenswrapper[4953]: E0223 00:08:11.326548 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.392276 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.392397 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.392417 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.392449 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.392470 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.496392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.496462 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.496498 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.496545 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.496596 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.599012 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.599063 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.599079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.599099 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.599118 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.701687 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.701737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.701748 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.701764 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.701774 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.803728 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.803796 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.803814 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.803843 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.803863 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.907170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.907228 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.907248 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.907400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:11 crc kubenswrapper[4953]: I0223 00:08:11.907421 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:11Z","lastTransitionTime":"2026-02-23T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.009586 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.009635 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.009656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.009684 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.009702 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.113034 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.113090 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.113102 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.113121 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.113133 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.217094 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.217182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.217207 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.217248 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.217274 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.223931 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.224000 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.224023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.224050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.224071 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: E0223 00:08:12.246339 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.251020 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.251073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.251083 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.251103 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.251117 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: E0223 00:08:12.270242 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.275679 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.275758 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.275771 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.275792 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.275805 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: E0223 00:08:12.295628 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.298179 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:54:20.444535471 +0000 UTC Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.300435 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.300504 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.300522 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.300549 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.300567 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: E0223 00:08:12.321098 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.325363 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.325446 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:12 crc kubenswrapper[4953]: E0223 00:08:12.325525 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:12 crc kubenswrapper[4953]: E0223 00:08:12.325646 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.326671 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.326723 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.326743 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.326771 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.326792 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: E0223 00:08:12.347375 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:12 crc kubenswrapper[4953]: E0223 00:08:12.347638 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.349405 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.349464 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.349485 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.349514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.349535 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.452588 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.452628 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.452640 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.452656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.452665 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.556245 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.556381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.556403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.556434 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.556456 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.660014 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.660081 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.660104 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.660142 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.660169 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.772221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.772307 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.772323 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.772345 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.772362 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.875644 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.875763 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.875791 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.875826 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.875849 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.979482 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.979597 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.979618 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.979652 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:12 crc kubenswrapper[4953]: I0223 00:08:12.979676 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:12Z","lastTransitionTime":"2026-02-23T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.083188 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.083246 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.083255 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.083272 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.083282 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.188105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.188174 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.188189 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.188215 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.188234 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.292344 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.292389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.292398 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.292414 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.292425 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.298474 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:22:31.034470655 +0000 UTC Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.326179 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.326284 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:13 crc kubenswrapper[4953]: E0223 00:08:13.326550 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:13 crc kubenswrapper[4953]: E0223 00:08:13.326824 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.349221 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.373052 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.396903 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.397008 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.397025 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.397054 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.397074 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.397704 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.418963 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.438265 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.460355 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.483997 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.500708 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.501487 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.501542 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.501561 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.501591 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.501610 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.517962 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0
cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.533545 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.550188 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.567479 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:54Z\\\",\\\"message\\\":\\\"2026-02-23T00:07:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e\\\\n2026-02-23T00:07:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e to /host/opt/cni/bin/\\\\n2026-02-23T00:07:09Z [verbose] multus-daemon started\\\\n2026-02-23T00:07:09Z [verbose] Readiness Indicator file check\\\\n2026-02-23T00:07:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.587993 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.606794 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.607217 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.607329 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.607438 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.607506 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.608825 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:08:03Z\\\",\\\"message\\\":\\\"r removal\\\\nI0223 00:08:03.208869 7037 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 00:08:03.208883 7037 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:08:03.208891 7037 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 00:08:03.208922 7037 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0223 00:08:03.208911 7037 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 00:08:03.208935 7037 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 00:08:03.208892 7037 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 00:08:03.208946 7037 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 00:08:03.208970 7037 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 00:08:03.208986 7037 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 00:08:03.208987 7037 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 00:08:03.209063 7037 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 00:08:03.209086 7037 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 00:08:03.209124 7037 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 00:08:03.209141 7037 factory.go:656] Stopping watch factory\\\\nI0223 00:08:03.209159 7037 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:08:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.625315 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.643011 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca83
2f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.656191 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.669428 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.710418 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.710458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.710469 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.710488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.710499 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.820535 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.820641 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.820662 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.820696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.820720 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.922728 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.922783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.922798 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.922819 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:13 crc kubenswrapper[4953]: I0223 00:08:13.922834 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:13Z","lastTransitionTime":"2026-02-23T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.025267 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.025347 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.025360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.025377 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.025389 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.128556 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.128596 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.128607 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.128622 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.128633 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.231764 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.231818 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.231835 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.231859 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.231876 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.299355 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:17:08.186887722 +0000 UTC Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.326147 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:14 crc kubenswrapper[4953]: E0223 00:08:14.326371 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.326178 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:14 crc kubenswrapper[4953]: E0223 00:08:14.326992 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.334922 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.335059 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.335125 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.335190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.335250 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.438161 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.439072 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.439182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.439273 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.439386 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.543187 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.543616 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.543761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.543934 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.544066 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.647627 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.647693 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.647711 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.647737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.647754 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.750636 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.750726 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.750748 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.750775 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.750836 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.853850 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.854157 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.854242 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.854426 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.854483 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.956958 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.957036 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.957057 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.957086 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:14 crc kubenswrapper[4953]: I0223 00:08:14.957103 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:14Z","lastTransitionTime":"2026-02-23T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.059764 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.059808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.059828 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.059845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.059856 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.162864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.162908 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.162931 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.162953 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.162968 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.266096 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.266151 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.266160 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.266178 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.266190 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.300525 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:01:35.088239652 +0000 UTC Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.326172 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.326219 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:15 crc kubenswrapper[4953]: E0223 00:08:15.326350 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:15 crc kubenswrapper[4953]: E0223 00:08:15.326568 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.368474 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.368521 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.368532 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.368549 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.368560 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.471083 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.471124 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.471135 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.471152 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.471162 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.573263 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.573390 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.573414 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.573441 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.573460 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.676419 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.676466 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.676478 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.676494 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.676506 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.778619 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.778667 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.778682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.778701 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.778712 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.881528 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.881563 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.881581 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.881601 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.881616 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.984068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.984156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.984187 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.984221 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:15 crc kubenswrapper[4953]: I0223 00:08:15.984244 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:15Z","lastTransitionTime":"2026-02-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.088195 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.088234 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.088243 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.088256 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.088265 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.191360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.191402 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.191415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.191430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.191441 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.294473 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.294505 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.294512 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.294525 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.294534 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.301205 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:52:15.702961845 +0000 UTC Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.325676 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.325720 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:16 crc kubenswrapper[4953]: E0223 00:08:16.325880 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:16 crc kubenswrapper[4953]: E0223 00:08:16.326009 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.397244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.397495 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.397571 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.397665 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.397747 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.502075 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.502151 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.502173 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.502199 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.502227 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.605908 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.605946 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.605955 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.605968 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.605976 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.708392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.708427 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.708438 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.708454 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.708465 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.810938 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.810982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.810991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.811003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.811014 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.913496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.913544 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.913559 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.913577 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:16 crc kubenswrapper[4953]: I0223 00:08:16.913622 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:16Z","lastTransitionTime":"2026-02-23T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.016184 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.016231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.016244 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.016262 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.016274 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.119328 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.119386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.119405 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.119430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.119448 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.223073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.223137 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.223156 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.223181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.223199 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.302166 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:34:48.151069448 +0000 UTC Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.325583 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.325596 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:17 crc kubenswrapper[4953]: E0223 00:08:17.326176 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:17 crc kubenswrapper[4953]: E0223 00:08:17.326625 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.327844 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.327898 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.327915 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.327941 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.327969 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.431250 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.431360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.431381 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.431405 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.431423 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.535071 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.535122 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.535136 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.535155 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.535171 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.637844 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.637895 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.637904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.637918 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.637927 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.739991 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.740021 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.740029 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.740042 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.740051 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.842672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.842718 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.842731 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.842745 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.842756 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.944694 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.944737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.944748 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.944763 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:17 crc kubenswrapper[4953]: I0223 00:08:17.944775 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:17Z","lastTransitionTime":"2026-02-23T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.047608 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.047662 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.047675 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.047692 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.047703 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.150250 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.150338 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.150352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.150367 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.150380 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.252676 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.252763 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.252781 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.252805 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.252825 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.303283 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:01:50.240177351 +0000 UTC Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.326059 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.326083 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:18 crc kubenswrapper[4953]: E0223 00:08:18.326583 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:18 crc kubenswrapper[4953]: E0223 00:08:18.326739 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.328663 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:08:18 crc kubenswrapper[4953]: E0223 00:08:18.328971 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.356355 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.356443 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:18 
crc kubenswrapper[4953]: I0223 00:08:18.356465 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.356493 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.356511 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.459190 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.459243 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.459257 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.459275 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.459327 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.566477 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.566536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.566546 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.566565 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.566576 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.669730 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.669772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.669783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.669798 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.669810 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.772390 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.772415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.772423 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.772435 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.772444 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.875365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.875417 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.875429 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.875448 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.875463 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.981656 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.981730 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.981746 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.981767 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:18 crc kubenswrapper[4953]: I0223 00:08:18.981786 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:18Z","lastTransitionTime":"2026-02-23T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.083848 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.083892 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.083903 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.083919 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.083930 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.187625 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.187672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.187682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.187697 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.187708 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.290863 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.290901 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.290912 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.290928 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.290942 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.304358 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:11:47.768172671 +0000 UTC
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.326099 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.326151 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:08:19 crc kubenswrapper[4953]: E0223 00:08:19.326241 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:08:19 crc kubenswrapper[4953]: E0223 00:08:19.326334 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.394024 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.394070 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.394078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.394093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.394102 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.496609 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.496650 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.496661 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.496679 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.496691 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.599835 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.599891 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.599910 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.599933 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.599952 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.702561 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.702660 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.702685 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.702715 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.702742 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.805963 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.806013 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.806026 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.806046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.806058 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.909051 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.909119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.909145 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.909178 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:19 crc kubenswrapper[4953]: I0223 00:08:19.909204 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:19Z","lastTransitionTime":"2026-02-23T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.011634 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.011672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.011683 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.011698 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.011707 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.115504 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.115536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.115544 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.115556 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.115566 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.218100 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.218138 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.218147 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.218161 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.218171 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.304591 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:40:38.0154969 +0000 UTC
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.321806 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.321869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.321887 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.321911 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.321928 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.326025 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.326076 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:08:20 crc kubenswrapper[4953]: E0223 00:08:20.326151 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:08:20 crc kubenswrapper[4953]: E0223 00:08:20.326214 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.426716 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.426798 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.427088 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.427132 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.427154 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.530897 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.530964 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.530980 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.531005 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.531023 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.633632 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.633696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.633716 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.633737 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.633752 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.735933 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.736022 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.736047 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.736073 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.736089 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.839389 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.839491 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.839516 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.839552 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.839575 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.942485 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.942620 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.942652 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.942681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:20 crc kubenswrapper[4953]: I0223 00:08:20.942703 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:20Z","lastTransitionTime":"2026-02-23T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.046391 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.046455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.046477 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.046506 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.046530 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.149441 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.149501 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.149526 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.149553 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.149574 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.253259 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.253372 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.253393 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.253418 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.253437 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.305378 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:22:35.870097756 +0000 UTC
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.325868 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:08:21 crc kubenswrapper[4953]: E0223 00:08:21.326044 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.325880 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:08:21 crc kubenswrapper[4953]: E0223 00:08:21.326383 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.357408 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.357486 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.357509 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.357541 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.357565 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.460088 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.460151 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.460164 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.460183 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.460224 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.562926 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.562986 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.563006 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.563030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.563048 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.665900 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.665962 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.665979 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.666003 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.666023 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.769460 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.769524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.769536 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.769562 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.769578 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.872905 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.872971 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.872994 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.873023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.873044 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.976343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.976416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.976436 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.976464 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:21 crc kubenswrapper[4953]: I0223 00:08:21.976484 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:21Z","lastTransitionTime":"2026-02-23T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.079940 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.079994 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.080006 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.080027 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.080043 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.182827 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.182904 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.182929 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.182964 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.182989 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.286079 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.286200 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.286223 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.286248 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.286265 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.305753 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:47:19.112963588 +0000 UTC
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.326321 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.326422 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:08:22 crc kubenswrapper[4953]: E0223 00:08:22.326566 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:08:22 crc kubenswrapper[4953]: E0223 00:08:22.326781 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.389977 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.390042 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.390063 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.390086 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.390103 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.492629 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.492681 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.492696 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.492715 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.492729 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.595963 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.596030 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.596046 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.596069 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.596092 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.692613 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.692673 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.692695 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.692725 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.692747 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:22 crc kubenswrapper[4953]: E0223 00:08:22.712176 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:22Z is after 2025-08-24T17:21:41Z"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.717864 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.717910 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.717919 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.717933 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.717942 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:22 crc kubenswrapper[4953]: E0223 00:08:22.738969 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.744657 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.744718 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.744736 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.744762 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.744784 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:22 crc kubenswrapper[4953]: E0223 00:08:22.776041 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.781205 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.781243 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.781260 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.781312 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.781329 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:22 crc kubenswrapper[4953]: E0223 00:08:22.801601 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.806741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.806784 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.806800 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.806822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.806836 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:22 crc kubenswrapper[4953]: E0223 00:08:22.825703 4953 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"533c54a2-4b2a-486b-84ff-79539bb86284\\\",\\\"systemUUID\\\":\\\"eabb6e58-7c3d-4135-8dce-11f0c13836c2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:22 crc kubenswrapper[4953]: E0223 00:08:22.825960 4953 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.827897 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.827968 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.827993 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.828023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.828045 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.931035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.931088 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.931111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.931139 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:22 crc kubenswrapper[4953]: I0223 00:08:22.931159 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:22Z","lastTransitionTime":"2026-02-23T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.033824 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.033908 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.033932 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.033988 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.034016 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.136924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.137066 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.137093 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.137123 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.137146 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.239923 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.240010 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.240029 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.240061 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.240082 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.306856 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:03:28.237407653 +0000 UTC Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.325490 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:23 crc kubenswrapper[4953]: E0223 00:08:23.325651 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.325670 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:23 crc kubenswrapper[4953]: E0223 00:08:23.325873 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.343347 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.343416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.343436 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.343561 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.343672 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.345253 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.362231 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-sqwrp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cae958ea-e590-41fb-a965-b4d17d18002c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://866ecbe2a6d09238bbb05b41cb7b10bcc981fd4a7fc39e2d587feab98668fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5x59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-sqwrp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.383865 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pxzfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:07:54Z\\\",\\\"message\\\":\\\"2026-02-23T00:07:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e\\\\n2026-02-23T00:07:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_89eb0e96-4659-4912-a266-f6469898a67e to /host/opt/cni/bin/\\\\n2026-02-23T00:07:09Z [verbose] multus-daemon started\\\\n2026-02-23T00:07:09Z [verbose] Readiness Indicator file check\\\\n2026-02-23T00:07:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w46n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pxzfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.408308 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59193de-17ea-458e-9569-6881173e66e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff2fb7add6ee6668146236c02c64262460bb43e0a8631f713565b323ea1b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://570afa3a6fa38b34e64f793a9472ed386df1b41f5db246be8860188811150442\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://769dcf2aa61400eafd86cc15673795d612b6b85066c1a85e85bc36bc7651828e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17aae740fe112f9974d2a514a5463b3c347a4996d3802044b6fb4d13080f2705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ba2
fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44ba2fcc44f1c27c0ee6c6074f24388c8e4d735773ee33b186b4ffa66780491e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe6d03c1b0130664596d2cca12958729b66609ade2a402556ae2dc60e16da0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bf14694a90a25ae88f471f18a31f87f01b796da8116a7be7e1f5e10376a81a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c67r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dw5dv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.433933 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5937f1d2-1966-4337-b099-ad0af539fe11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T00:08:03Z\\\",\\\"message\\\":\\\"r removal\\\\nI0223 00:08:03.208869 7037 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 00:08:03.208883 7037 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 00:08:03.208891 7037 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 00:08:03.208922 7037 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0223 00:08:03.208911 7037 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 00:08:03.208935 7037 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 00:08:03.208892 7037 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 00:08:03.208946 7037 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 00:08:03.208970 7037 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 00:08:03.208986 7037 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 00:08:03.208987 7037 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 00:08:03.209063 7037 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 00:08:03.209086 7037 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 00:08:03.209124 7037 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 00:08:03.209141 7037 factory.go:656] Stopping watch factory\\\\nI0223 00:08:03.209159 7037 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:08:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41658411cd1d832be0
4d6109b2be52e5dc737efda43c8932252aff96bd1427fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x88gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69mr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.447386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.447455 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.447473 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.447498 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.447515 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.452825 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wppgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71837ac6-9a75-4640-af98-633ccdd09e20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxbvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wppgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc 
kubenswrapper[4953]: I0223 00:08:23.477645 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb70ef98-98d6-4d13-960f-eafb9095d015\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://530aaa156c0c9a
06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T00:07:04Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 00:07:04.231774 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 00:07:04.232054 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 00:07:04.233042 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3110229740/tls.crt::/tmp/serving-cert-3110229740/tls.key\\\\\\\"\\\\nI0223 00:07:04.937211 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 00:07:04.943254 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 00:07:04.943270 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 00:07:04.943306 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 00:07:04.943311 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 00:07:04.955721 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0223 00:07:04.955806 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955833 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 00:07:04.955885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 00:07:04.955931 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 00:07:04.955954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0223 
00:07:04.955747 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 00:07:04.956028 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 00:07:04.958653 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.500006 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dc1d00-c8a7-4bb6-b6c5-0ad3f2b02913\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6796aabb1d9bc2246f75fa7b84ae14700ec19439176406e677af835c78f9511e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b80b4c6343b1aeaff5cd0397f6b7fefef84abfcde28a2eaae7e6a34256dc13\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc048f03b4f4a081522ce93f3d798642c5c4410c2d9458f1afe8e4d211d5570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.520856 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.533935 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca7afa5-1274-436d-ab61-9e8796e4774c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b89a6bd6e5623a341dabcad953186ede946295dd1b1d1747d1578c20eb11418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd4
41694cfd8532ac7292251c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpl86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.547234 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de7b12-36af-448d-a1e7-2a7153ee8f48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27a2b94bf3e39dc5cf412426c8c8e30a9e085d5a10bcf84ea64d01a6371c86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6056e78ae465353d500663bf00838ba97b6dcd8ca6e286b5eb2427dbfbfc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b17f060992b08016c65d1faca4eedf4c8a2111b6540143d00dd192d736f0197e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b00ceead9def22640b741810ae2aa4e34dc8cc70c23fd2514ebd0a31918d48ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.549807 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.549840 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.549851 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.549868 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.549881 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.565557 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8f994e100b09c4ccbe42ecd773d707788543efd8f5cac1b9bc75a03819c72f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.583748 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407e5a806defd17d2731f0f5462d0ef2bb1822cf168172fc3724fbaffb0dbdce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a222672d8864a76706d2e23e705af0b70d81c9caa4c4cf52965970018c75f65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.598104 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f40483cd-6612-4eee-83e1-2a0972311b26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fb6baac5d4b9e77dfb09183b7479ea9aea68dd7d08acc23e34046baa8903341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fef75cdd610fcac5239d257ae46ecd1ed6eb
11f0b367f08f2c98d8d1a1e1063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jz4b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.614668 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc52d4ff-3262-4e91-a698-93c679395293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e115b920096378945da1bc7b1422541774ffb1329e6ea61f195b4fd06c347a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c16788be903a59e1ffce300ca7458956a27211bffd71cce9408a6abafe5f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:06:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.632712 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.648272 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4jfxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"920835de-d258-45c0-beaa-c478dddb38e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cedba660f1b02e4d270dac83d737fef0d3776dc8325d7d919d21f974c0907633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7wx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T00:07:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4jfxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.652682 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.652728 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.652741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.652761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.652773 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.664336 4953 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T00:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a0da8b6fdfb33463666c05ca51ca5e1dfb99249ef51bfc65abbe77a460a85e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T00:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T00:08:23Z is after 2025-08-24T17:21:41Z" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.755920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.755990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.756007 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.756034 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.756055 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.858919 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.858970 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.858981 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.858998 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.859010 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.960894 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.960944 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.960957 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.960974 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:23 crc kubenswrapper[4953]: I0223 00:08:23.960989 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:23Z","lastTransitionTime":"2026-02-23T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.064016 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.064102 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.064123 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.064150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.064170 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.167409 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.167466 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.167477 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.167497 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.167510 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.206887 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:24 crc kubenswrapper[4953]: E0223 00:08:24.207138 4953 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:08:24 crc kubenswrapper[4953]: E0223 00:08:24.207247 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs podName:71837ac6-9a75-4640-af98-633ccdd09e20 nodeName:}" failed. No retries permitted until 2026-02-23 00:09:28.207219818 +0000 UTC m=+166.141061714 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs") pod "network-metrics-daemon-wppgs" (UID: "71837ac6-9a75-4640-af98-633ccdd09e20") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.269540 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.269581 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.269590 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.269604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.269614 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.307025 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:53:10.192931118 +0000 UTC Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.325330 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.325347 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:24 crc kubenswrapper[4953]: E0223 00:08:24.325611 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:24 crc kubenswrapper[4953]: E0223 00:08:24.325457 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.372851 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.372934 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.372953 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.372985 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.373006 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.476398 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.476454 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.476470 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.476495 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.476511 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.579170 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.579240 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.579258 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.579320 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.579342 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.683065 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.683154 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.683181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.683215 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.683241 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.786881 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.786920 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.786931 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.786946 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.786960 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.890212 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.890283 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.890320 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.890343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.890361 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.994263 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.994343 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.994356 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.994374 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:24 crc kubenswrapper[4953]: I0223 00:08:24.994387 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:24Z","lastTransitionTime":"2026-02-23T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.098898 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.099009 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.099037 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.099068 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.099088 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.201672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.201731 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.201741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.201757 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.201782 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.304685 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.304734 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.304752 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.304770 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.304780 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.307957 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:56:16.050458858 +0000 UTC Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.325676 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.325720 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:25 crc kubenswrapper[4953]: E0223 00:08:25.325815 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:25 crc kubenswrapper[4953]: E0223 00:08:25.326055 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.408675 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.408741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.408761 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.408783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.408800 4953 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.511762 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.512081 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.512111 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.512141 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.512162 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.615720 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.615760 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.615771 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.615787 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.615798 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.718520 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.718588 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.718601 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.718621 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.718634 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.821253 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.821314 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.821327 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.821342 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.821352 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.923459 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.923567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.923587 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.923610 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:25 crc kubenswrapper[4953]: I0223 00:08:25.923625 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:25Z","lastTransitionTime":"2026-02-23T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.026738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.026799 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.026822 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.026844 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.026866 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.129524 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.129568 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.129579 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.129596 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.129610 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.232463 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.232514 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.232525 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.232555 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.232568 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.309026 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:25:19.114258705 +0000 UTC Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.325605 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.325640 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:26 crc kubenswrapper[4953]: E0223 00:08:26.325822 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:26 crc kubenswrapper[4953]: E0223 00:08:26.326005 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.335658 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.335709 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.335727 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.335749 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.335766 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.438945 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.439002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.439017 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.439043 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.439061 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.542095 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.542182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.542198 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.542223 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.542240 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.658695 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.658764 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.658781 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.658805 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.658822 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.761965 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.762023 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.762102 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.762134 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.762156 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.865072 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.865513 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.865721 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.865826 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.865913 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.967826 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.968238 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.968437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.968607 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:26 crc kubenswrapper[4953]: I0223 00:08:26.968762 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:26Z","lastTransitionTime":"2026-02-23T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.071605 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.071968 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.072120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.072309 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.072475 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.175078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.175458 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.175616 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.175764 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.175906 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.280935 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.280999 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.281009 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.281024 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.281033 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.309141 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:24:32.291320132 +0000 UTC
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.325514 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.325676 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:08:27 crc kubenswrapper[4953]: E0223 00:08:27.325874 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:08:27 crc kubenswrapper[4953]: E0223 00:08:27.326177 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.383785 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.384175 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.384367 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.384537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.384714 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.488430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.488490 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.488508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.488531 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.488546 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.591154 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.591503 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.591600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.591714 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.591815 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.693452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.693496 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.693508 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.693526 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.693541 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.795523 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.795805 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.795913 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.796016 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.796101 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.898070 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.898105 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.898116 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.898131 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:27 crc kubenswrapper[4953]: I0223 00:08:27.898142 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:27Z","lastTransitionTime":"2026-02-23T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.001333 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.001374 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.001385 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.001402 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.001413 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.102941 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.102975 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.102986 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.103002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.103012 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.205097 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.205157 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.205178 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.205213 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.205234 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.307600 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.307645 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.307655 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.307672 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.307684 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.310772 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:38:24.111329453 +0000 UTC
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.326082 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.326209 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:08:28 crc kubenswrapper[4953]: E0223 00:08:28.326358 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:08:28 crc kubenswrapper[4953]: E0223 00:08:28.326596 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.410035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.410071 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.410083 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.410098 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.410110 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.511741 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.511775 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.511783 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.511795 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.511804 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.614085 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.614139 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.614150 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.614168 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.614180 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.716437 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.716668 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.716772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.716869 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.716946 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.819622 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.819660 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.819722 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.819738 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.819752 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.921803 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.921826 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.921834 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.921845 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:28 crc kubenswrapper[4953]: I0223 00:08:28.921853 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:28Z","lastTransitionTime":"2026-02-23T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.023547 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.023793 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.023858 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.023924 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.023989 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.126242 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.126278 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.126303 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.126317 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.126328 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.228166 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.228415 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.228495 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.228567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.228664 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.311718 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:47:13.37608154 +0000 UTC
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.326046 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.326233 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:08:29 crc kubenswrapper[4953]: E0223 00:08:29.326334 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:08:29 crc kubenswrapper[4953]: E0223 00:08:29.326566 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.330786 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.330865 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.330877 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.330891 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.330904 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.433038 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.433130 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.433155 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.433202 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.433241 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.535896 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.535939 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.535952 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.535969 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.535982 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.638048 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.638119 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.638132 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.638149 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.638161 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.740520 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.740587 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.740604 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.740627 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.740647 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.843220 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.843317 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.843341 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.843398 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.843417 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.946181 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.946235 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.946247 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.946263 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:29 crc kubenswrapper[4953]: I0223 00:08:29.946275 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:29Z","lastTransitionTime":"2026-02-23T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.049452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.049537 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.049562 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.049598 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.049623 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.151936 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.151997 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.152014 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.152035 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.152052 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.254516 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.254545 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.254554 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.254567 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.254575 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.312464 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:23:04.489301075 +0000 UTC Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.325772 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.325851 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:30 crc kubenswrapper[4953]: E0223 00:08:30.325976 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:30 crc kubenswrapper[4953]: E0223 00:08:30.326376 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.356740 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.356781 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.356793 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.356808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.356820 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.459080 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.459107 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.459115 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.459128 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.459137 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.561543 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.561583 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.561594 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.561611 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.561625 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.663386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.663421 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.663430 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.663443 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.663452 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.765832 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.765872 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.765885 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.765901 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.765912 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.868120 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.868188 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.868211 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.868241 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.868259 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.971772 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.971807 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.971816 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.971831 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:30 crc kubenswrapper[4953]: I0223 00:08:30.971840 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:30Z","lastTransitionTime":"2026-02-23T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.075185 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.075233 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.075252 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.075274 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.075330 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.178360 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.178423 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.178442 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.178467 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.178483 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.281135 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.281171 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.281182 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.281198 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.281210 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.383363 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.383392 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.383400 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.383412 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.383420 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.486148 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.486205 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.486216 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.486231 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.486243 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.488714 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:26:22.558441004 +0000 UTC Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.488780 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:31 crc kubenswrapper[4953]: E0223 00:08:31.488912 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.489038 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.489199 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:31 crc kubenswrapper[4953]: E0223 00:08:31.489506 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:31 crc kubenswrapper[4953]: E0223 00:08:31.489965 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.490154 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:08:31 crc kubenswrapper[4953]: E0223 00:08:31.490319 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69mr8_openshift-ovn-kubernetes(5937f1d2-1966-4337-b099-ad0af539fe11)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.502591 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.588915 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.588967 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.588982 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.589002 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.589016 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.691948 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.691990 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.691998 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.692012 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.692021 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.794298 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.794339 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.794350 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.794365 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.794674 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.896687 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.896763 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.896782 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.896808 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.896826 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.999025 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.999061 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.999074 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.999089 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:31 crc kubenswrapper[4953]: I0223 00:08:31.999099 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:31Z","lastTransitionTime":"2026-02-23T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.101062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.101103 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.101117 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.101134 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.101147 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.203771 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.203842 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.203859 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.203884 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.203901 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.306327 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.306386 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.306403 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.306425 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.306441 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.325758 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:32 crc kubenswrapper[4953]: E0223 00:08:32.326015 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.409416 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.409480 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.409488 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.409502 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.409511 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.489121 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:47:37.328431224 +0000 UTC Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.511660 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.511701 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.511711 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.511726 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.511735 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.614009 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.614050 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.614062 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.614078 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.614090 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.716452 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.716498 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.716510 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.716527 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.716539 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.818352 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.818401 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.818424 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.818450 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.818462 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.833571 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.833631 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.833657 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.833700 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.833724 4953 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T00:08:32Z","lastTransitionTime":"2026-02-23T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.883891 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8"] Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.884326 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.887052 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.888364 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.888268 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.889081 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.900434 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.900508 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=41.900495507 podStartE2EDuration="41.900495507s" podCreationTimestamp="2026-02-23 00:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:32.900449645 +0000 UTC m=+110.834291541" watchObservedRunningTime="2026-02-23 00:08:32.900495507 +0000 UTC m=+110.834337353" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.900580 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.900659 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.900682 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.900755 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.939857 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.939840253 podStartE2EDuration="55.939840253s" 
podCreationTimestamp="2026-02-23 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:32.939803782 +0000 UTC m=+110.873645638" watchObservedRunningTime="2026-02-23 00:08:32.939840253 +0000 UTC m=+110.873682099" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.940068 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.940051779 podStartE2EDuration="1.940051779s" podCreationTimestamp="2026-02-23 00:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:32.925791947 +0000 UTC m=+110.859633793" watchObservedRunningTime="2026-02-23 00:08:32.940051779 +0000 UTC m=+110.873893625" Feb 23 00:08:32 crc kubenswrapper[4953]: I0223 00:08:32.978645 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jz4b" podStartSLOduration=87.978621015 podStartE2EDuration="1m27.978621015s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:32.978301057 +0000 UTC m=+110.912142913" watchObservedRunningTime="2026-02-23 00:08:32.978621015 +0000 UTC m=+110.912462861" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:32.999937 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4jfxl" podStartSLOduration=87.999920311 podStartE2EDuration="1m27.999920311s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:32.999761917 +0000 UTC m=+110.933603773" watchObservedRunningTime="2026-02-23 
00:08:32.999920311 +0000 UTC m=+110.933762157" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.002055 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.002115 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.002154 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.002183 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.002210 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.002220 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.002352 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.004310 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.010060 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 
00:08:33.024857 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e4e624f-1bbf-4a79-bdb3-81f3afa6439b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2wmw8\" (UID: \"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.055537 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.055515201 podStartE2EDuration="1m28.055515201s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:33.055213643 +0000 UTC m=+110.989055499" watchObservedRunningTime="2026-02-23 00:08:33.055515201 +0000 UTC m=+110.989357057" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.071909 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.071889748 podStartE2EDuration="1m28.071889748s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:33.070160013 +0000 UTC m=+111.004001859" watchObservedRunningTime="2026-02-23 00:08:33.071889748 +0000 UTC m=+111.005731614" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.096063 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-sqwrp" podStartSLOduration=88.096009418 podStartE2EDuration="1m28.096009418s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 
00:08:33.095941306 +0000 UTC m=+111.029783152" watchObservedRunningTime="2026-02-23 00:08:33.096009418 +0000 UTC m=+111.029851284" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.126807 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-pxzfb" podStartSLOduration=88.126773991 podStartE2EDuration="1m28.126773991s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:33.111396759 +0000 UTC m=+111.045238605" watchObservedRunningTime="2026-02-23 00:08:33.126773991 +0000 UTC m=+111.060615837" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.126977 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dw5dv" podStartSLOduration=88.126972906 podStartE2EDuration="1m28.126972906s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:33.126579566 +0000 UTC m=+111.060421422" watchObservedRunningTime="2026-02-23 00:08:33.126972906 +0000 UTC m=+111.060814762" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.171777 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podStartSLOduration=88.171759975 podStartE2EDuration="1m28.171759975s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:33.171107708 +0000 UTC m=+111.104949564" watchObservedRunningTime="2026-02-23 00:08:33.171759975 +0000 UTC m=+111.105601821" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.211676 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.330205 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.330310 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:33 crc kubenswrapper[4953]: E0223 00:08:33.331637 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.331728 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:33 crc kubenswrapper[4953]: E0223 00:08:33.331835 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:33 crc kubenswrapper[4953]: E0223 00:08:33.331921 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.489705 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:22:51.008145651 +0000 UTC Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.489767 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.497709 4953 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.986100 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" event={"ID":"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b","Type":"ContainerStarted","Data":"731c3cd55269ce3ede16fbcdf4dac95c8d7f64b781d4a90b9d2acfc18f2fec25"} Feb 23 00:08:33 crc kubenswrapper[4953]: I0223 00:08:33.986703 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" event={"ID":"6e4e624f-1bbf-4a79-bdb3-81f3afa6439b","Type":"ContainerStarted","Data":"1393b412a419fa5431232da757a24a11a604b3f08935c12261f0fe169fb8ec97"} Feb 23 00:08:34 crc kubenswrapper[4953]: I0223 00:08:34.002011 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wmw8" podStartSLOduration=89.00198834 podStartE2EDuration="1m29.00198834s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:34.000783519 +0000 UTC m=+111.934625385" watchObservedRunningTime="2026-02-23 
00:08:34.00198834 +0000 UTC m=+111.935830196" Feb 23 00:08:34 crc kubenswrapper[4953]: I0223 00:08:34.325362 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:34 crc kubenswrapper[4953]: E0223 00:08:34.325505 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:35 crc kubenswrapper[4953]: I0223 00:08:35.325791 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:35 crc kubenswrapper[4953]: I0223 00:08:35.325772 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:35 crc kubenswrapper[4953]: I0223 00:08:35.325885 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:35 crc kubenswrapper[4953]: E0223 00:08:35.326040 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:35 crc kubenswrapper[4953]: E0223 00:08:35.326196 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:35 crc kubenswrapper[4953]: E0223 00:08:35.326281 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:36 crc kubenswrapper[4953]: I0223 00:08:36.325851 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:36 crc kubenswrapper[4953]: E0223 00:08:36.326011 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:37 crc kubenswrapper[4953]: I0223 00:08:37.326199 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:37 crc kubenswrapper[4953]: I0223 00:08:37.326253 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:37 crc kubenswrapper[4953]: I0223 00:08:37.326210 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:37 crc kubenswrapper[4953]: E0223 00:08:37.326448 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:37 crc kubenswrapper[4953]: E0223 00:08:37.326523 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:37 crc kubenswrapper[4953]: E0223 00:08:37.326606 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:38 crc kubenswrapper[4953]: I0223 00:08:38.326050 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:38 crc kubenswrapper[4953]: E0223 00:08:38.326228 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:39 crc kubenswrapper[4953]: I0223 00:08:39.326042 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:39 crc kubenswrapper[4953]: I0223 00:08:39.326168 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:39 crc kubenswrapper[4953]: I0223 00:08:39.326401 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:39 crc kubenswrapper[4953]: E0223 00:08:39.326431 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:39 crc kubenswrapper[4953]: E0223 00:08:39.326756 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:39 crc kubenswrapper[4953]: E0223 00:08:39.326980 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:40 crc kubenswrapper[4953]: I0223 00:08:40.326236 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:40 crc kubenswrapper[4953]: E0223 00:08:40.326405 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:41 crc kubenswrapper[4953]: I0223 00:08:41.325657 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:41 crc kubenswrapper[4953]: I0223 00:08:41.325774 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:41 crc kubenswrapper[4953]: E0223 00:08:41.325836 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:41 crc kubenswrapper[4953]: E0223 00:08:41.325959 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:41 crc kubenswrapper[4953]: I0223 00:08:41.326088 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:41 crc kubenswrapper[4953]: E0223 00:08:41.326198 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:42 crc kubenswrapper[4953]: I0223 00:08:42.012844 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/1.log" Feb 23 00:08:42 crc kubenswrapper[4953]: I0223 00:08:42.013571 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/0.log" Feb 23 00:08:42 crc kubenswrapper[4953]: I0223 00:08:42.013646 4953 generic.go:334] "Generic (PLEG): container finished" podID="c6ae22b1-a5f9-483a-be3d-32cfb7d516d5" containerID="47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce" exitCode=1 Feb 23 00:08:42 crc kubenswrapper[4953]: I0223 00:08:42.013697 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxzfb" event={"ID":"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5","Type":"ContainerDied","Data":"47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce"} Feb 23 00:08:42 crc kubenswrapper[4953]: I0223 00:08:42.013764 4953 scope.go:117] "RemoveContainer" containerID="c44d157325d99891e7c696a21028acd372536fcadd054fc24422ab989c797371" Feb 23 00:08:42 crc kubenswrapper[4953]: I0223 00:08:42.014107 4953 scope.go:117] "RemoveContainer" containerID="47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce" Feb 23 00:08:42 crc kubenswrapper[4953]: E0223 00:08:42.014269 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-pxzfb_openshift-multus(c6ae22b1-a5f9-483a-be3d-32cfb7d516d5)\"" pod="openshift-multus/multus-pxzfb" podUID="c6ae22b1-a5f9-483a-be3d-32cfb7d516d5" Feb 23 00:08:42 crc kubenswrapper[4953]: I0223 00:08:42.325508 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:42 crc kubenswrapper[4953]: E0223 00:08:42.325636 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:43 crc kubenswrapper[4953]: I0223 00:08:43.018184 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/1.log" Feb 23 00:08:43 crc kubenswrapper[4953]: E0223 00:08:43.296406 4953 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 23 00:08:43 crc kubenswrapper[4953]: I0223 00:08:43.327413 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:43 crc kubenswrapper[4953]: I0223 00:08:43.329524 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:43 crc kubenswrapper[4953]: I0223 00:08:43.329575 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:43 crc kubenswrapper[4953]: E0223 00:08:43.329604 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:43 crc kubenswrapper[4953]: E0223 00:08:43.329517 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:43 crc kubenswrapper[4953]: E0223 00:08:43.329693 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:43 crc kubenswrapper[4953]: E0223 00:08:43.443628 4953 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 00:08:44 crc kubenswrapper[4953]: I0223 00:08:44.325550 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:44 crc kubenswrapper[4953]: E0223 00:08:44.326003 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:45 crc kubenswrapper[4953]: I0223 00:08:45.325870 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:45 crc kubenswrapper[4953]: I0223 00:08:45.325929 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:45 crc kubenswrapper[4953]: I0223 00:08:45.325888 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:45 crc kubenswrapper[4953]: E0223 00:08:45.326112 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:45 crc kubenswrapper[4953]: E0223 00:08:45.326773 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:45 crc kubenswrapper[4953]: E0223 00:08:45.327051 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:45 crc kubenswrapper[4953]: I0223 00:08:45.327390 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:08:46 crc kubenswrapper[4953]: I0223 00:08:46.028091 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/3.log" Feb 23 00:08:46 crc kubenswrapper[4953]: I0223 00:08:46.030478 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerStarted","Data":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} Feb 23 00:08:46 crc kubenswrapper[4953]: I0223 00:08:46.030811 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:08:46 crc kubenswrapper[4953]: I0223 00:08:46.055708 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podStartSLOduration=101.055682702 podStartE2EDuration="1m41.055682702s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:08:46.054042239 +0000 UTC m=+123.987884105" watchObservedRunningTime="2026-02-23 00:08:46.055682702 +0000 UTC m=+123.989524588" Feb 23 00:08:46 crc kubenswrapper[4953]: I0223 00:08:46.175683 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wppgs"] Feb 23 00:08:46 crc kubenswrapper[4953]: I0223 00:08:46.175832 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:46 crc kubenswrapper[4953]: E0223 00:08:46.175974 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:46 crc kubenswrapper[4953]: I0223 00:08:46.325561 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:46 crc kubenswrapper[4953]: E0223 00:08:46.325683 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:47 crc kubenswrapper[4953]: I0223 00:08:47.325680 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:47 crc kubenswrapper[4953]: E0223 00:08:47.326361 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:47 crc kubenswrapper[4953]: I0223 00:08:47.326503 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:47 crc kubenswrapper[4953]: E0223 00:08:47.326714 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:48 crc kubenswrapper[4953]: I0223 00:08:48.325900 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:48 crc kubenswrapper[4953]: I0223 00:08:48.325940 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:48 crc kubenswrapper[4953]: E0223 00:08:48.326061 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:48 crc kubenswrapper[4953]: E0223 00:08:48.326138 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:48 crc kubenswrapper[4953]: E0223 00:08:48.445080 4953 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 00:08:49 crc kubenswrapper[4953]: I0223 00:08:49.325503 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:49 crc kubenswrapper[4953]: E0223 00:08:49.325678 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:49 crc kubenswrapper[4953]: I0223 00:08:49.325978 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:49 crc kubenswrapper[4953]: E0223 00:08:49.326069 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:50 crc kubenswrapper[4953]: I0223 00:08:50.325964 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:50 crc kubenswrapper[4953]: I0223 00:08:50.325965 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:50 crc kubenswrapper[4953]: E0223 00:08:50.326160 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:50 crc kubenswrapper[4953]: E0223 00:08:50.326278 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:51 crc kubenswrapper[4953]: I0223 00:08:51.325908 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:51 crc kubenswrapper[4953]: E0223 00:08:51.326056 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:51 crc kubenswrapper[4953]: I0223 00:08:51.326087 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:51 crc kubenswrapper[4953]: E0223 00:08:51.326226 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:52 crc kubenswrapper[4953]: I0223 00:08:52.325604 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:52 crc kubenswrapper[4953]: I0223 00:08:52.325698 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:52 crc kubenswrapper[4953]: E0223 00:08:52.325825 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 00:08:52 crc kubenswrapper[4953]: E0223 00:08:52.325962 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20" Feb 23 00:08:53 crc kubenswrapper[4953]: I0223 00:08:53.326211 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 00:08:53 crc kubenswrapper[4953]: E0223 00:08:53.328127 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 00:08:53 crc kubenswrapper[4953]: I0223 00:08:53.328246 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 00:08:53 crc kubenswrapper[4953]: E0223 00:08:53.328496 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 00:08:53 crc kubenswrapper[4953]: E0223 00:08:53.446755 4953 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 00:08:54 crc kubenswrapper[4953]: I0223 00:08:54.325788 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:08:54 crc kubenswrapper[4953]: I0223 00:08:54.325849 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs" Feb 23 00:08:54 crc kubenswrapper[4953]: E0223 00:08:54.325990 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:08:54 crc kubenswrapper[4953]: E0223 00:08:54.326088 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:08:55 crc kubenswrapper[4953]: I0223 00:08:55.325928 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:08:55 crc kubenswrapper[4953]: E0223 00:08:55.326090 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:08:55 crc kubenswrapper[4953]: I0223 00:08:55.325954 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:08:55 crc kubenswrapper[4953]: E0223 00:08:55.326427 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:08:56 crc kubenswrapper[4953]: I0223 00:08:56.325577 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:08:56 crc kubenswrapper[4953]: I0223 00:08:56.325612 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:08:56 crc kubenswrapper[4953]: E0223 00:08:56.325709 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:08:56 crc kubenswrapper[4953]: E0223 00:08:56.325876 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:08:56 crc kubenswrapper[4953]: I0223 00:08:56.326170 4953 scope.go:117] "RemoveContainer" containerID="47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce"
Feb 23 00:08:57 crc kubenswrapper[4953]: I0223 00:08:57.066080 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/1.log"
Feb 23 00:08:57 crc kubenswrapper[4953]: I0223 00:08:57.066130 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxzfb" event={"ID":"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5","Type":"ContainerStarted","Data":"d3656a4f92aeabf073796fdda06705189896687219459312443c8d2846f004d0"}
Feb 23 00:08:57 crc kubenswrapper[4953]: I0223 00:08:57.325775 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:08:57 crc kubenswrapper[4953]: E0223 00:08:57.326061 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 00:08:57 crc kubenswrapper[4953]: I0223 00:08:57.326133 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:08:57 crc kubenswrapper[4953]: E0223 00:08:57.326284 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 00:08:58 crc kubenswrapper[4953]: I0223 00:08:58.326329 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:08:58 crc kubenswrapper[4953]: I0223 00:08:58.326359 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:08:58 crc kubenswrapper[4953]: E0223 00:08:58.326513 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 00:08:58 crc kubenswrapper[4953]: E0223 00:08:58.326673 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wppgs" podUID="71837ac6-9a75-4640-af98-633ccdd09e20"
Feb 23 00:08:59 crc kubenswrapper[4953]: I0223 00:08:59.325455 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:08:59 crc kubenswrapper[4953]: I0223 00:08:59.325492 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:08:59 crc kubenswrapper[4953]: I0223 00:08:59.327319 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 23 00:08:59 crc kubenswrapper[4953]: I0223 00:08:59.328108 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 23 00:08:59 crc kubenswrapper[4953]: I0223 00:08:59.328336 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 23 00:08:59 crc kubenswrapper[4953]: I0223 00:08:59.328364 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 23 00:09:00 crc kubenswrapper[4953]: I0223 00:09:00.326191 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:09:00 crc kubenswrapper[4953]: I0223 00:09:00.326193 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:09:00 crc kubenswrapper[4953]: I0223 00:09:00.328971 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 23 00:09:00 crc kubenswrapper[4953]: I0223 00:09:00.329054 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.074214 4953 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.129758 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.130731 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.134062 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.134507 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.134976 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.135357 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.135593 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.135714 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-flc99"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.135810 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.136625 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.140355 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.140850 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.140881 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.140943 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.141103 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.141099 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.141938 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.143038 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.143557 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.144059 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29530080-nw6hr"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.144607 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29530080-nw6hr"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.155937 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.157373 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 23 00:09:03 crc kubenswrapper[4953]: W0223 00:09:03.157995 4953 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object
Feb 23 00:09:03 crc kubenswrapper[4953]: E0223 00:09:03.158066 4953 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.158619 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.158954 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.159228 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.160678 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.161658 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.161921 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.165623 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.166282 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.166847 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.167342 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.168399 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.187024 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.188173 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.190206 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.190800 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.191915 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-f6ptk"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.192750 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.193240 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.193971 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.194244 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwcpz"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.194850 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.195097 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.196219 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.196451 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.196486 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.196572 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.196626 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.196649 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sbhgq"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.197005 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sbhgq"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.197052 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.197341 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.197549 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.199805 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.200060 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.204652 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.205157 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xd97t"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.205548 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.205872 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-szx7x"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.205920 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.206203 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.208888 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dr5n9"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.209253 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-p96zz"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.209618 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.209793 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dr5n9"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.210586 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sb7m5"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.211118 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.211955 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214038 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214077 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214114 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214283 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214323 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214509 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214525 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214558 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214589 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214609 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214724 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214830 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.214927 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.215028 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.215204 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.215353 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.215527 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.215641 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.216143 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.226974 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.227252 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.227516 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.227686 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.228278 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.228553 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.228725 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.228937 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.229196 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.229769 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.229782 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.229925 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230002 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230518 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230675 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-serving-cert\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230725 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230760 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncnx8\" (UniqueName: \"kubernetes.io/projected/d5581f85-7931-4952-9fb8-4829ef46865a-kube-api-access-ncnx8\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230798 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6643e75-78f7-40fc-b597-d37fa9381727-config\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230830 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-etcd-client\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230852 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5581f85-7931-4952-9fb8-4829ef46865a-auth-proxy-config\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.230927 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231087 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231244 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d5581f85-7931-4952-9fb8-4829ef46865a-machine-approver-tls\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231308 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-audit-policies\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231341 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gqn7\" (UniqueName: \"kubernetes.io/projected/7402d900-4b66-4c9d-8904-b35c3d4b06c7-kube-api-access-4gqn7\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231398 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5581f85-7931-4952-9fb8-4829ef46865a-config\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231422 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7402d900-4b66-4c9d-8904-b35c3d4b06c7-audit-dir\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231453 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f6643e75-78f7-40fc-b597-d37fa9381727-images\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231481 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkgd\" (UniqueName: \"kubernetes.io/projected/f6643e75-78f7-40fc-b597-d37fa9381727-kube-api-access-xdkgd\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231541 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231603 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6643e75-78f7-40fc-b597-d37fa9381727-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231632 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-encryption-config\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231458 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.231643 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.232170 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.232228 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.232169 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.232409 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.232855 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7"]
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.233004 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.234200 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.234566 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.238779 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.240898 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.241621 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.247021 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.247146 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.247021 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.247258 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.249749 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.249899 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.250970 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc
kubenswrapper[4953]: I0223 00:09:03.251182 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.251362 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.251792 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.251958 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.252115 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.252228 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.252319 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.252375 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.253078 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.254684 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9x56p"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.255280 4953 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.255652 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.255842 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.256722 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.256831 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.256943 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.256960 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.262419 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.264257 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.267997 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.268690 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.270927 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.272478 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.272596 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.272604 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.272838 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.272992 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.273301 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.274364 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.275595 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.276132 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.276576 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-blqz2"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.277216 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-blqz2" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.278506 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.279049 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.285392 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pjdts"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.285956 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.286394 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.286710 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.287443 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.287546 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.297147 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t92pb"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.297704 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.302644 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.303248 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-flc99"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.306971 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.313073 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.314667 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.316204 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtp7m"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.317250 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.317368 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.317687 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.317873 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.318522 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.318942 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x9fkx"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.319947 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.320156 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.320725 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.321416 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.322546 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.324503 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.326058 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-szx7x"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332154 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-serving-cert\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332188 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-service-ca\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") 
" pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332204 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332220 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncnx8\" (UniqueName: \"kubernetes.io/projected/d5581f85-7931-4952-9fb8-4829ef46865a-kube-api-access-ncnx8\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332237 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-etcd-serving-ca\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332252 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrj2v\" (UniqueName: \"kubernetes.io/projected/bb747d6b-337e-4040-948b-88b262efd03b-kube-api-access-jrj2v\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332261 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 
00:09:03.332268 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6643e75-78f7-40fc-b597-d37fa9381727-config\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332299 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4nf\" (UniqueName: \"kubernetes.io/projected/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-kube-api-access-2h4nf\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332315 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-config\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332331 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-serving-cert\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332349 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-etcd-client\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 
00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332364 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d5581f85-7931-4952-9fb8-4829ef46865a-machine-approver-tls\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332379 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnr4\" (UniqueName: \"kubernetes.io/projected/26de90f6-24f2-4b9a-b60e-8dcc999890e8-kube-api-access-5vnr4\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332398 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5581f85-7931-4952-9fb8-4829ef46865a-auth-proxy-config\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332414 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-oauth-config\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332438 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-oauth-serving-cert\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332453 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j42w\" (UniqueName: \"kubernetes.io/projected/0f48e137-1959-43ff-8386-e7560140f2d4-kube-api-access-4j42w\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.332469 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-service-ca-bundle\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333094 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts4wn\" (UniqueName: \"kubernetes.io/projected/f725846e-4a85-49f1-9f43-bf53a4f066db-kube-api-access-ts4wn\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333122 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-etcd-ca\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc 
kubenswrapper[4953]: I0223 00:09:03.333176 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-audit-policies\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333193 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gqn7\" (UniqueName: \"kubernetes.io/projected/7402d900-4b66-4c9d-8904-b35c3d4b06c7-kube-api-access-4gqn7\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333199 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333211 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b88234f7-8355-4c3b-a5f3-6195bbe46bee-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gdxz7\" (UID: \"b88234f7-8355-4c3b-a5f3-6195bbe46bee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333230 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f48e137-1959-43ff-8386-e7560140f2d4-audit-dir\") pod \"apiserver-76f77b778f-f6ptk\" (UID: 
\"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333248 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lggtp\" (UniqueName: \"kubernetes.io/projected/87070b2f-f30b-445a-8134-6708a6e6790e-kube-api-access-lggtp\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333319 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d094612c-465f-4bec-b3f8-bf28a1471217-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333350 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b87g\" (UniqueName: \"kubernetes.io/projected/d094612c-465f-4bec-b3f8-bf28a1471217-kube-api-access-8b87g\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333374 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f725846e-4a85-49f1-9f43-bf53a4f066db-config\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333505 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-serving-cert\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333567 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8edda615-0162-4066-98cc-3dd760855917-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333593 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb747d6b-337e-4040-948b-88b262efd03b-etcd-client\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333643 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5581f85-7931-4952-9fb8-4829ef46865a-config\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333665 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8af278f-dfdc-45c1-84da-df3c70951061-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rzc24\" (UID: \"a8af278f-dfdc-45c1-84da-df3c70951061\") 
" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333689 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-image-import-ca\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333707 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26de90f6-24f2-4b9a-b60e-8dcc999890e8-serving-cert\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333771 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-audit-policies\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333920 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1b0919d1-104f-4a33-b5df-d604cfa672f2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333956 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/87070b2f-f30b-445a-8134-6708a6e6790e-srv-cert\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333974 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d094612c-465f-4bec-b3f8-bf28a1471217-metrics-tls\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.333998 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f725846e-4a85-49f1-9f43-bf53a4f066db-serving-cert\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334017 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9gf\" (UniqueName: \"kubernetes.io/projected/1b0919d1-104f-4a33-b5df-d604cfa672f2-kube-api-access-sm9gf\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334034 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7402d900-4b66-4c9d-8904-b35c3d4b06c7-audit-dir\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 
00:09:03.334068 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7402d900-4b66-4c9d-8904-b35c3d4b06c7-audit-dir\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334067 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5581f85-7931-4952-9fb8-4829ef46865a-auth-proxy-config\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334086 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6643e75-78f7-40fc-b597-d37fa9381727-config\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334102 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mztn\" (UniqueName: \"kubernetes.io/projected/a8af278f-dfdc-45c1-84da-df3c70951061-kube-api-access-5mztn\") pod \"package-server-manager-789f6589d5-rzc24\" (UID: \"a8af278f-dfdc-45c1-84da-df3c70951061\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334121 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-config\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334138 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f6643e75-78f7-40fc-b597-d37fa9381727-images\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334157 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkgd\" (UniqueName: \"kubernetes.io/projected/f6643e75-78f7-40fc-b597-d37fa9381727-kube-api-access-xdkgd\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334196 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5581f85-7931-4952-9fb8-4829ef46865a-config\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334248 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjvr\" (UniqueName: \"kubernetes.io/projected/8edda615-0162-4066-98cc-3dd760855917-kube-api-access-2sjvr\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334398 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-etcd-service-ca\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334432 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-etcd-client\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334556 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8edda615-0162-4066-98cc-3dd760855917-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334588 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-encryption-config\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334616 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0919d1-104f-4a33-b5df-d604cfa672f2-serving-cert\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:03 
crc kubenswrapper[4953]: I0223 00:09:03.334635 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8edda615-0162-4066-98cc-3dd760855917-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334652 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f6643e75-78f7-40fc-b597-d37fa9381727-images\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334689 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334723 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb747d6b-337e-4040-948b-88b262efd03b-serving-cert\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334772 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0f48e137-1959-43ff-8386-e7560140f2d4-node-pullsecrets\") pod \"apiserver-76f77b778f-f6ptk\" (UID: 
\"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334812 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-trusted-ca-bundle\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334839 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27hpj\" (UniqueName: \"kubernetes.io/projected/b88234f7-8355-4c3b-a5f3-6195bbe46bee-kube-api-access-27hpj\") pod \"cluster-samples-operator-665b6dd947-gdxz7\" (UID: \"b88234f7-8355-4c3b-a5f3-6195bbe46bee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334883 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334919 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-audit\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.334940 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/87070b2f-f30b-445a-8134-6708a6e6790e-profile-collector-cert\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.335422 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-config\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.335468 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7402d900-4b66-4c9d-8904-b35c3d4b06c7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.335472 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-encryption-config\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.335517 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f725846e-4a85-49f1-9f43-bf53a4f066db-trusted-ca\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.335543 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-config\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.335581 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.335608 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6643e75-78f7-40fc-b597-d37fa9381727-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.338422 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-etcd-client\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.338542 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d094612c-465f-4bec-b3f8-bf28a1471217-trusted-ca\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.338586 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-serving-cert\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.338645 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29530080-nw6hr"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.338672 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwcpz"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.338682 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.338691 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.338700 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vr4sx"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.339484 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.339503 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sbhgq"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.339565 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.339772 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.341587 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.342599 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7402d900-4b66-4c9d-8904-b35c3d4b06c7-encryption-config\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.346671 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d5581f85-7931-4952-9fb8-4829ef46865a-machine-approver-tls\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.352961 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f6643e75-78f7-40fc-b597-d37fa9381727-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.357019 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.357784 4953 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.359721 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xd97t"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.364452 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t92pb"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.366100 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.367859 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.369769 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.372147 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pjdts"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.372502 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.374052 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.375393 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-f6ptk"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.376792 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.378021 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dztnb"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.378724 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.379533 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dr5n9"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.380580 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9x56p"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.381939 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.383608 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-blqz2"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.388504 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sb7m5"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.390235 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.391476 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vr4sx"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.392459 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 
00:09:03.392792 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.393463 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.394436 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtp7m"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.396141 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.396751 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.397530 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.398392 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.399461 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x9fkx"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.400503 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n5dbh"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.401480 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hwdnz"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.401865 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.402027 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hwdnz" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.402455 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hwdnz"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.403686 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n5dbh"] Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.412172 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439134 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8edda615-0162-4066-98cc-3dd760855917-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439171 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8edda615-0162-4066-98cc-3dd760855917-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439188 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-encryption-config\") pod 
\"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439204 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0919d1-104f-4a33-b5df-d604cfa672f2-serving-cert\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439219 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439234 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb747d6b-337e-4040-948b-88b262efd03b-serving-cert\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439248 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0f48e137-1959-43ff-8386-e7560140f2d4-node-pullsecrets\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439273 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-trusted-ca-bundle\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439302 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27hpj\" (UniqueName: \"kubernetes.io/projected/b88234f7-8355-4c3b-a5f3-6195bbe46bee-kube-api-access-27hpj\") pod \"cluster-samples-operator-665b6dd947-gdxz7\" (UID: \"b88234f7-8355-4c3b-a5f3-6195bbe46bee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439320 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/87070b2f-f30b-445a-8134-6708a6e6790e-profile-collector-cert\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439334 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-audit\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439349 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-config\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439373 4953 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-config\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439396 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439417 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f725846e-4a85-49f1-9f43-bf53a4f066db-trusted-ca\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439433 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d094612c-465f-4bec-b3f8-bf28a1471217-trusted-ca\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439448 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-service-ca\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439475 
4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-etcd-serving-ca\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439494 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrj2v\" (UniqueName: \"kubernetes.io/projected/bb747d6b-337e-4040-948b-88b262efd03b-kube-api-access-jrj2v\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439514 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h4nf\" (UniqueName: \"kubernetes.io/projected/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-kube-api-access-2h4nf\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439530 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-config\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439546 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-serving-cert\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 
00:09:03.439565 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnr4\" (UniqueName: \"kubernetes.io/projected/26de90f6-24f2-4b9a-b60e-8dcc999890e8-kube-api-access-5vnr4\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439582 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-oauth-config\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439597 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-oauth-serving-cert\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439614 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j42w\" (UniqueName: \"kubernetes.io/projected/0f48e137-1959-43ff-8386-e7560140f2d4-kube-api-access-4j42w\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439632 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-service-ca-bundle\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439647 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts4wn\" (UniqueName: \"kubernetes.io/projected/f725846e-4a85-49f1-9f43-bf53a4f066db-kube-api-access-ts4wn\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439662 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-etcd-ca\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439686 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b88234f7-8355-4c3b-a5f3-6195bbe46bee-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gdxz7\" (UID: \"b88234f7-8355-4c3b-a5f3-6195bbe46bee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439702 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f48e137-1959-43ff-8386-e7560140f2d4-audit-dir\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439717 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lggtp\" (UniqueName: 
\"kubernetes.io/projected/87070b2f-f30b-445a-8134-6708a6e6790e-kube-api-access-lggtp\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439735 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f725846e-4a85-49f1-9f43-bf53a4f066db-config\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439753 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d094612c-465f-4bec-b3f8-bf28a1471217-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439768 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b87g\" (UniqueName: \"kubernetes.io/projected/d094612c-465f-4bec-b3f8-bf28a1471217-kube-api-access-8b87g\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439783 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-serving-cert\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439810 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8edda615-0162-4066-98cc-3dd760855917-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439830 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb747d6b-337e-4040-948b-88b262efd03b-etcd-client\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439851 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8af278f-dfdc-45c1-84da-df3c70951061-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rzc24\" (UID: \"a8af278f-dfdc-45c1-84da-df3c70951061\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439872 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-image-import-ca\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439888 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26de90f6-24f2-4b9a-b60e-8dcc999890e8-serving-cert\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439903 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1b0919d1-104f-4a33-b5df-d604cfa672f2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439917 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87070b2f-f30b-445a-8134-6708a6e6790e-srv-cert\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439936 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d094612c-465f-4bec-b3f8-bf28a1471217-metrics-tls\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439959 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f725846e-4a85-49f1-9f43-bf53a4f066db-serving-cert\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.439987 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9gf\" (UniqueName: 
\"kubernetes.io/projected/1b0919d1-104f-4a33-b5df-d604cfa672f2-kube-api-access-sm9gf\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.440004 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mztn\" (UniqueName: \"kubernetes.io/projected/a8af278f-dfdc-45c1-84da-df3c70951061-kube-api-access-5mztn\") pod \"package-server-manager-789f6589d5-rzc24\" (UID: \"a8af278f-dfdc-45c1-84da-df3c70951061\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.440020 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-config\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.440049 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjvr\" (UniqueName: \"kubernetes.io/projected/8edda615-0162-4066-98cc-3dd760855917-kube-api-access-2sjvr\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.440073 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-etcd-service-ca\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: 
I0223 00:09:03.440088 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-etcd-client\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.441941 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-etcd-ca\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.441973 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.441967 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8edda615-0162-4066-98cc-3dd760855917-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.442414 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-service-ca-bundle\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.442660 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f725846e-4a85-49f1-9f43-bf53a4f066db-trusted-ca\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.443051 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f48e137-1959-43ff-8386-e7560140f2d4-audit-dir\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.443114 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-config\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.443401 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-trusted-ca-bundle\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.443486 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d094612c-465f-4bec-b3f8-bf28a1471217-trusted-ca\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.443646 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-etcd-serving-ca\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.443728 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f725846e-4a85-49f1-9f43-bf53a4f066db-config\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.444119 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.444155 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-service-ca\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.444266 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0f48e137-1959-43ff-8386-e7560140f2d4-node-pullsecrets\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.444523 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-audit\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.445218 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-config\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.445469 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26de90f6-24f2-4b9a-b60e-8dcc999890e8-config\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.445524 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-oauth-serving-cert\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.445762 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1b0919d1-104f-4a33-b5df-d604cfa672f2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.445895 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb747d6b-337e-4040-948b-88b262efd03b-etcd-service-ca\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.446024 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-etcd-client\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.446120 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-config\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.446166 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb747d6b-337e-4040-948b-88b262efd03b-etcd-client\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.446204 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8edda615-0162-4066-98cc-3dd760855917-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.446374 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-encryption-config\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.446397 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0f48e137-1959-43ff-8386-e7560140f2d4-image-import-ca\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.447146 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b88234f7-8355-4c3b-a5f3-6195bbe46bee-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gdxz7\" (UID: \"b88234f7-8355-4c3b-a5f3-6195bbe46bee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.447314 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-oauth-config\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.447537 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f48e137-1959-43ff-8386-e7560140f2d4-serving-cert\") pod \"apiserver-76f77b778f-f6ptk\" (UID: 
\"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.447576 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-console-serving-cert\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.448097 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f725846e-4a85-49f1-9f43-bf53a4f066db-serving-cert\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.449305 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/87070b2f-f30b-445a-8134-6708a6e6790e-profile-collector-cert\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.449347 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.449733 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/87070b2f-f30b-445a-8134-6708a6e6790e-srv-cert\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.451996 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb747d6b-337e-4040-948b-88b262efd03b-serving-cert\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.452740 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.454810 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d094612c-465f-4bec-b3f8-bf28a1471217-metrics-tls\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.455375 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26de90f6-24f2-4b9a-b60e-8dcc999890e8-serving-cert\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.473143 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.478584 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8af278f-dfdc-45c1-84da-df3c70951061-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rzc24\" (UID: \"a8af278f-dfdc-45c1-84da-df3c70951061\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.492632 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.513158 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.533034 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.552547 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.560524 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0919d1-104f-4a33-b5df-d604cfa672f2-serving-cert\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.574447 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.593008 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.612243 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.672780 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.693326 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.713033 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.733248 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.753788 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.772888 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.793093 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.813753 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.832686 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.853028 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.873381 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.893457 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.912912 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.933656 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.954311 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.973383 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 23 00:09:03 crc kubenswrapper[4953]: I0223 00:09:03.994048 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.013810 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.033696 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.053796 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.073998 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.092954 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.113451 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.133422 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.155879 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.173802 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.192842 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.213078 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.233653 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.253307 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.273111 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.291759 4953 request.go:700] Waited for 1.003967143s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.293445 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.313522 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.342097 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.352885 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.373796 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.393369 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.413672 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.433796 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 23
00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.452893 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.473342 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.493788 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.512976 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.532838 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.557508 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.573542 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.593059 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.613232 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.633455 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.653262 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.672483 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.693046 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.719953 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.733248 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.753532 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.773103 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.793800 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.813980 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.834634 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.853089 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.872928 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.893757 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.913638 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.932816 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.953253 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.974376 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 23 00:09:04 crc kubenswrapper[4953]: I0223 00:09:04.993193 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.028467 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncnx8\" (UniqueName: \"kubernetes.io/projected/d5581f85-7931-4952-9fb8-4829ef46865a-kube-api-access-ncnx8\") pod \"machine-approver-56656f9798-v2lhk\" (UID: \"d5581f85-7931-4952-9fb8-4829ef46865a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.069123 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkgd\" (UniqueName: \"kubernetes.io/projected/f6643e75-78f7-40fc-b597-d37fa9381727-kube-api-access-xdkgd\") pod \"machine-api-operator-5694c8668f-flc99\" (UID: \"f6643e75-78f7-40fc-b597-d37fa9381727\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.261917 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.296735 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.406665 4953 request.go:700] Waited for 2.004403029s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.748385 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.748785 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.749465 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.749624 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.749749 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.749889 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.750404 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.750826 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.750845 4953 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.750871 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.750839 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.751111 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.751229 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.751433 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.760436 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j42w\" (UniqueName: \"kubernetes.io/projected/0f48e137-1959-43ff-8386-e7560140f2d4-kube-api-access-4j42w\") pod \"apiserver-76f77b778f-f6ptk\" (UID: \"0f48e137-1959-43ff-8386-e7560140f2d4\") " pod="openshift-apiserver/apiserver-76f77b778f-f6ptk"
Feb 23 00:09:05 crc
kubenswrapper[4953]: I0223 00:09:05.760921 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h4nf\" (UniqueName: \"kubernetes.io/projected/4d7fd8ab-12ef-4686-887c-3f4acbb5a30b-kube-api-access-2h4nf\") pod \"console-f9d7485db-sbhgq\" (UID: \"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b\") " pod="openshift-console/console-f9d7485db-sbhgq"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.769895 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.769942 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.769962 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmltz\" (UniqueName: \"kubernetes.io/projected/045ff803-aa45-4faa-b4ee-7f0de4093f04-kube-api-access-hmltz\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.769978 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqxq\" (UniqueName: \"kubernetes.io/projected/2c463426-aee8-41c2-8f08-e553efa4742a-kube-api-access-4qqxq\") pod \"image-pruner-29530080-nw6hr\" (UID: \"2c463426-aee8-41c2-8f08-e553efa4742a\") " pod="openshift-image-registry/image-pruner-29530080-nw6hr"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770000 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-certificates\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770022 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770037 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-stats-auth\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770052 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlxf\" (UniqueName: \"kubernetes.io/projected/ddd17504-ce95-4253-8c88-1f5cf50f9184-kube-api-access-6zlxf\") pod \"migrator-59844c95c7-f62s6\" (UID: \"ddd17504-ce95-4253-8c88-1f5cf50f9184\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770066 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770083 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770098 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-metrics-certs\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770112 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045ff803-aa45-4faa-b4ee-7f0de4093f04-service-ca-bundle\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770144 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770159 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770173 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60b432c5-5097-40f3-983b-3a2355744ee3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770191 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5w42\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-kube-api-access-d5w42\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770205 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-bound-sa-token\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770221 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vjqf\" (UniqueName: \"kubernetes.io/projected/c90b2280-0314-4b8a-979f-d678ee9a4a98-kube-api-access-8vjqf\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770235 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770251 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770267 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-tls\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770325 4953
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-296p5\" (UniqueName: \"kubernetes.io/projected/60b432c5-5097-40f3-983b-3a2355744ee3-kube-api-access-296p5\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770341 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-default-certificate\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770355 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60b432c5-5097-40f3-983b-3a2355744ee3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770388 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e02a9c4-892c-419b-af9c-c8afc2158051-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770404 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770425 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-dir\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770446 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770495 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v87h\" (UniqueName: \"kubernetes.io/projected/2e02a9c4-892c-419b-af9c-c8afc2158051-kube-api-access-4v87h\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770523 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-policies\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770538 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770556 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e02a9c4-892c-419b-af9c-c8afc2158051-srv-cert\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770691 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770723 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770739 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-trusted-ca\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.770767 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2c463426-aee8-41c2-8f08-e553efa4742a-serviceca\") pod \"image-pruner-29530080-nw6hr\" (UID: \"2c463426-aee8-41c2-8f08-e553efa4742a\") " pod="openshift-image-registry/image-pruner-29530080-nw6hr"
Feb 23 00:09:05 crc kubenswrapper[4953]: E0223 00:09:05.771276 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.271265533 +0000 UTC m=+144.205107379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.772411 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8edda615-0162-4066-98cc-3dd760855917-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.777641 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts4wn\" (UniqueName: \"kubernetes.io/projected/f725846e-4a85-49f1-9f43-bf53a4f066db-kube-api-access-ts4wn\") pod \"console-operator-58897d9998-dr5n9\" (UID: \"f725846e-4a85-49f1-9f43-bf53a4f066db\") " pod="openshift-console-operator/console-operator-58897d9998-dr5n9"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.782547 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27hpj\" (UniqueName: \"kubernetes.io/projected/b88234f7-8355-4c3b-a5f3-6195bbe46bee-kube-api-access-27hpj\") pod \"cluster-samples-operator-665b6dd947-gdxz7\" (UID: \"b88234f7-8355-4c3b-a5f3-6195bbe46bee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.782832 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dr5n9"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.785136 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gqn7\" (UniqueName: \"kubernetes.io/projected/7402d900-4b66-4c9d-8904-b35c3d4b06c7-kube-api-access-4gqn7\") pod \"apiserver-7bbb656c7d-2zh26\" (UID: \"7402d900-4b66-4c9d-8904-b35c3d4b06c7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.787156 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjvr\" (UniqueName: \"kubernetes.io/projected/8edda615-0162-4066-98cc-3dd760855917-kube-api-access-2sjvr\") pod \"cluster-image-registry-operator-dc59b4c8b-g6rdl\" (UID: \"8edda615-0162-4066-98cc-3dd760855917\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.788091 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lggtp\" (UniqueName: \"kubernetes.io/projected/87070b2f-f30b-445a-8134-6708a6e6790e-kube-api-access-lggtp\") pod \"catalog-operator-68c6474976-9gcsg\" (UID: \"87070b2f-f30b-445a-8134-6708a6e6790e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.788550 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrj2v\" (UniqueName: \"kubernetes.io/projected/bb747d6b-337e-4040-948b-88b262efd03b-kube-api-access-jrj2v\") pod \"etcd-operator-b45778765-szx7x\" (UID: \"bb747d6b-337e-4040-948b-88b262efd03b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x"
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.792525 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName:
\"kubernetes.io/projected/d094612c-465f-4bec-b3f8-bf28a1471217-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.794041 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mztn\" (UniqueName: \"kubernetes.io/projected/a8af278f-dfdc-45c1-84da-df3c70951061-kube-api-access-5mztn\") pod \"package-server-manager-789f6589d5-rzc24\" (UID: \"a8af278f-dfdc-45c1-84da-df3c70951061\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.794465 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnr4\" (UniqueName: \"kubernetes.io/projected/26de90f6-24f2-4b9a-b60e-8dcc999890e8-kube-api-access-5vnr4\") pod \"authentication-operator-69f744f599-xd97t\" (UID: \"26de90f6-24f2-4b9a-b60e-8dcc999890e8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.797052 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.800075 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b87g\" (UniqueName: \"kubernetes.io/projected/d094612c-465f-4bec-b3f8-bf28a1471217-kube-api-access-8b87g\") pod \"ingress-operator-5b745b69d9-t4h9d\" (UID: \"d094612c-465f-4bec-b3f8-bf28a1471217\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.800123 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9gf\" (UniqueName: \"kubernetes.io/projected/1b0919d1-104f-4a33-b5df-d604cfa672f2-kube-api-access-sm9gf\") pod \"openshift-config-operator-7777fb866f-cj6c7\" (UID: \"1b0919d1-104f-4a33-b5df-d604cfa672f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.802191 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872528 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872645 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vjqf\" (UniqueName: \"kubernetes.io/projected/c90b2280-0314-4b8a-979f-d678ee9a4a98-kube-api-access-8vjqf\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872673 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t882k\" (UID: \"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872696 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/64893275-e793-4afa-9736-e52ec6ecf447-node-bootstrap-token\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872716 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-client-ca\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:05 crc kubenswrapper[4953]: E0223 00:09:05.872741 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.372713647 +0000 UTC m=+144.306555513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872782 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-tls\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872815 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 
00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872840 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15299e81-114c-47b9-a589-1a8d78426736-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872858 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-296p5\" (UniqueName: \"kubernetes.io/projected/60b432c5-5097-40f3-983b-3a2355744ee3-kube-api-access-296p5\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872906 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03337365-2181-4cf3-90d6-6664103f220b-config-volume\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872943 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q5w7\" (UniqueName: \"kubernetes.io/projected/bd9d22e8-4e02-41dd-926d-d95d15beceeb-kube-api-access-4q5w7\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872958 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/486601c8-1d5e-4805-8ab4-d4a55de883f9-signing-cabundle\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872977 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e02a9c4-892c-419b-af9c-c8afc2158051-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.872993 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z98mq\" (UniqueName: \"kubernetes.io/projected/2e231e84-62b1-447f-b8bd-713b1027e045-kube-api-access-z98mq\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873050 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd6c\" (UniqueName: \"kubernetes.io/projected/486601c8-1d5e-4805-8ab4-d4a55de883f9-kube-api-access-mqd6c\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873082 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-mountpoint-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" 
Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873125 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsv4b\" (UniqueName: \"kubernetes.io/projected/69abc20a-54cb-47c6-884d-e12fd1984fdb-kube-api-access-jsv4b\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873152 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae22cab2-d791-4513-8794-e5d93b7447e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873166 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-plugins-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873196 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e02a9c4-892c-419b-af9c-c8afc2158051-srv-cert\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873213 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873231 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdftp\" (UniqueName: \"kubernetes.io/projected/28f92e4d-87a5-4cb4-9f42-1301b5d4fc31-kube-api-access-jdftp\") pod \"multus-admission-controller-857f4d67dd-9x56p\" (UID: \"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873248 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhxds\" (UniqueName: \"kubernetes.io/projected/0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2-kube-api-access-nhxds\") pod \"dns-operator-744455d44c-x9fkx\" (UID: \"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873269 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6769w\" (UniqueName: \"kubernetes.io/projected/e7cca116-4f86-480b-9186-651912ae24d1-kube-api-access-6769w\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873321 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc 
kubenswrapper[4953]: I0223 00:09:05.873338 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-trusted-ca\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873355 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz558\" (UniqueName: \"kubernetes.io/projected/6b7576be-af0b-4553-bc96-125e87709ad1-kube-api-access-rz558\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjp8z\" (UID: \"6b7576be-af0b-4553-bc96-125e87709ad1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873372 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-config\") pod \"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873387 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b63716df-3a08-4a47-8bd6-64827459651e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873405 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/2c463426-aee8-41c2-8f08-e553efa4742a-serviceca\") pod \"image-pruner-29530080-nw6hr\" (UID: \"2c463426-aee8-41c2-8f08-e553efa4742a\") " pod="openshift-image-registry/image-pruner-29530080-nw6hr" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873428 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b87923c-05c9-40bd-a84f-6b6462883363-cert\") pod \"ingress-canary-hwdnz\" (UID: \"0b87923c-05c9-40bd-a84f-6b6462883363\") " pod="openshift-ingress-canary/ingress-canary-hwdnz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873443 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2hr\" (UniqueName: \"kubernetes.io/projected/b63716df-3a08-4a47-8bd6-64827459651e-kube-api-access-vz2hr\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873470 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873507 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwk94\" (UniqueName: \"kubernetes.io/projected/2a9783d6-f83d-4e9c-bbe8-815b994fede2-kube-api-access-pwk94\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: \"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873525 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873541 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873557 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-serving-cert\") pod \"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873599 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmltz\" (UniqueName: \"kubernetes.io/projected/045ff803-aa45-4faa-b4ee-7f0de4093f04-kube-api-access-hmltz\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873632 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cedfc67a-fc52-4c43-8962-619b6e436d60-tmpfs\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873667 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-certificates\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873691 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqxq\" (UniqueName: \"kubernetes.io/projected/2c463426-aee8-41c2-8f08-e553efa4742a-kube-api-access-4qqxq\") pod \"image-pruner-29530080-nw6hr\" (UID: \"2c463426-aee8-41c2-8f08-e553efa4742a\") " pod="openshift-image-registry/image-pruner-29530080-nw6hr" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873733 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873754 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be696be4-2c84-4434-9d93-804c9bb6604b-serving-cert\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 
23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873773 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cedfc67a-fc52-4c43-8962-619b6e436d60-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873825 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9783d6-f83d-4e9c-bbe8-815b994fede2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: \"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873864 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zlxf\" (UniqueName: \"kubernetes.io/projected/ddd17504-ce95-4253-8c88-1f5cf50f9184-kube-api-access-6zlxf\") pod \"migrator-59844c95c7-f62s6\" (UID: \"ddd17504-ce95-4253-8c88-1f5cf50f9184\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873887 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/28f92e4d-87a5-4cb4-9f42-1301b5d4fc31-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9x56p\" (UID: \"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873908 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873930 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-config\") pod \"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873950 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873970 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-metrics-certs\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.873990 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15299e81-114c-47b9-a589-1a8d78426736-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.874022 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045ff803-aa45-4faa-b4ee-7f0de4093f04-service-ca-bundle\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.874068 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60b432c5-5097-40f3-983b-3a2355744ee3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.874091 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf77c\" (UniqueName: \"kubernetes.io/projected/be696be4-2c84-4434-9d93-804c9bb6604b-kube-api-access-qf77c\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.874129 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5w42\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-kube-api-access-d5w42\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.874170 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-client-ca\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.874272 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.874522 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.878689 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2c463426-aee8-41c2-8f08-e553efa4742a-serviceca\") pod \"image-pruner-29530080-nw6hr\" (UID: \"2c463426-aee8-41c2-8f08-e553efa4742a\") " pod="openshift-image-registry/image-pruner-29530080-nw6hr" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.881266 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-certificates\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.881870 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: E0223 00:09:05.892351 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.392331529 +0000 UTC m=+144.326173375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906072 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906117 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2e231e84-62b1-447f-b8bd-713b1027e045-images\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906140 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906176 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-default-certificate\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906197 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e231e84-62b1-447f-b8bd-713b1027e045-proxy-tls\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906217 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b63716df-3a08-4a47-8bd6-64827459651e-proxy-tls\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906241 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/60b432c5-5097-40f3-983b-3a2355744ee3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906260 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb671439-af50-4c0d-8f7c-d92b3571b2b0-config-volume\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906278 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03337365-2181-4cf3-90d6-6664103f220b-metrics-tls\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906320 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906343 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-dir\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906364 
4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906386 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-socket-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906430 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b59cp\" (UniqueName: \"kubernetes.io/projected/64893275-e793-4afa-9736-e52ec6ecf447-kube-api-access-b59cp\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906466 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-policies\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906489 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: 
\"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906511 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v87h\" (UniqueName: \"kubernetes.io/projected/2e02a9c4-892c-419b-af9c-c8afc2158051-kube-api-access-4v87h\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906535 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t882k\" (UID: \"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906559 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gmlm\" (UniqueName: \"kubernetes.io/projected/ae22cab2-d791-4513-8794-e5d93b7447e5-kube-api-access-4gmlm\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906650 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906674 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9d22e8-4e02-41dd-926d-d95d15beceeb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906699 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7576be-af0b-4553-bc96-125e87709ad1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjp8z\" (UID: \"6b7576be-af0b-4553-bc96-125e87709ad1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906723 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2-metrics-tls\") pod \"dns-operator-744455d44c-x9fkx\" (UID: \"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906745 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85cm\" (UniqueName: \"kubernetes.io/projected/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-kube-api-access-d85cm\") pod \"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906796 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/64893275-e793-4afa-9736-e52ec6ecf447-certs\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906833 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9d22e8-4e02-41dd-926d-d95d15beceeb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906857 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e231e84-62b1-447f-b8bd-713b1027e045-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906880 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-config\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906905 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-config\") pod \"kube-apiserver-operator-766d6c64bb-t882k\" (UID: \"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906941 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15299e81-114c-47b9-a589-1a8d78426736-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.906978 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/486601c8-1d5e-4805-8ab4-d4a55de883f9-signing-key\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907001 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzj2k\" (UniqueName: \"kubernetes.io/projected/0b87923c-05c9-40bd-a84f-6b6462883363-kube-api-access-qzj2k\") pod \"ingress-canary-hwdnz\" (UID: \"0b87923c-05c9-40bd-a84f-6b6462883363\") " pod="openshift-ingress-canary/ingress-canary-hwdnz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907037 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb671439-af50-4c0d-8f7c-d92b3571b2b0-secret-volume\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907059 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/cedfc67a-fc52-4c43-8962-619b6e436d60-webhook-cert\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907083 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvm94\" (UniqueName: \"kubernetes.io/projected/ef7c633a-9dcf-4613-9106-3c1f07a6afab-kube-api-access-zvm94\") pod \"downloads-7954f5f757-blqz2\" (UID: \"ef7c633a-9dcf-4613-9106-3c1f07a6afab\") " pod="openshift-console/downloads-7954f5f757-blqz2" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907103 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-csi-data-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907126 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a9783d6-f83d-4e9c-bbe8-815b994fede2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: \"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907162 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-stats-auth\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 
00:09:05.907186 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907209 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-config\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907339 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907367 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907391 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-registration-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907417 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzkm\" (UniqueName: \"kubernetes.io/projected/cb671439-af50-4c0d-8f7c-d92b3571b2b0-kube-api-access-5qzkm\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907439 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5lw\" (UniqueName: \"kubernetes.io/projected/03337365-2181-4cf3-90d6-6664103f220b-kube-api-access-zz5lw\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907460 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cl2q\" (UniqueName: \"kubernetes.io/projected/cedfc67a-fc52-4c43-8962-619b6e436d60-kube-api-access-4cl2q\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.907484 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-bound-sa-token\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 
00:09:05.893053 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-metrics-certs\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.895265 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60b432c5-5097-40f3-983b-3a2355744ee3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.896066 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.896450 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.918441 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-stats-auth\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " 
pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.905903 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/045ff803-aa45-4faa-b4ee-7f0de4093f04-service-ca-bundle\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.918762 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/045ff803-aa45-4faa-b4ee-7f0de4093f04-default-certificate\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.919335 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-policies\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.919402 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-trusted-ca\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.919474 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-dir\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.919752 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.919909 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.921070 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.925660 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.925942 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.926064 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.926503 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-tls\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.926539 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e02a9c4-892c-419b-af9c-c8afc2158051-srv-cert\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.926772 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.927135 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e02a9c4-892c-419b-af9c-c8afc2158051-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.929640 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60b432c5-5097-40f3-983b-3a2355744ee3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.935531 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.938021 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vjqf\" (UniqueName: \"kubernetes.io/projected/c90b2280-0314-4b8a-979f-d678ee9a4a98-kube-api-access-8vjqf\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.940687 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.940958 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmltz\" (UniqueName: 
\"kubernetes.io/projected/045ff803-aa45-4faa-b4ee-7f0de4093f04-kube-api-access-hmltz\") pod \"router-default-5444994796-p96zz\" (UID: \"045ff803-aa45-4faa-b4ee-7f0de4093f04\") " pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.941759 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sb7m5\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.943995 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qqxq\" (UniqueName: \"kubernetes.io/projected/2c463426-aee8-41c2-8f08-e553efa4742a-kube-api-access-4qqxq\") pod \"image-pruner-29530080-nw6hr\" (UID: \"2c463426-aee8-41c2-8f08-e553efa4742a\") " pod="openshift-image-registry/image-pruner-29530080-nw6hr" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.949019 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-296p5\" (UniqueName: \"kubernetes.io/projected/60b432c5-5097-40f3-983b-3a2355744ee3-kube-api-access-296p5\") pod \"openshift-apiserver-operator-796bbdcf4f-mdkxk\" (UID: \"60b432c5-5097-40f3-983b-3a2355744ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.949487 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zlxf\" (UniqueName: \"kubernetes.io/projected/ddd17504-ce95-4253-8c88-1f5cf50f9184-kube-api-access-6zlxf\") pod \"migrator-59844c95c7-f62s6\" (UID: \"ddd17504-ce95-4253-8c88-1f5cf50f9184\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 
00:09:05.958031 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.958813 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.962442 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29530080-nw6hr" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.964103 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-flc99"] Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.970926 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.977056 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5w42\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-kube-api-access-d5w42\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.983321 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.994415 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" Feb 23 00:09:05 crc kubenswrapper[4953]: I0223 00:09:05.995094 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-bound-sa-token\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.006861 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.007184 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dr5n9"] Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.008727 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009032 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gmlm\" (UniqueName: \"kubernetes.io/projected/ae22cab2-d791-4513-8794-e5d93b7447e5-kube-api-access-4gmlm\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009072 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-t882k\" (UID: \"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009100 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9d22e8-4e02-41dd-926d-d95d15beceeb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009124 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7576be-af0b-4553-bc96-125e87709ad1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjp8z\" (UID: \"6b7576be-af0b-4553-bc96-125e87709ad1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009162 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2-metrics-tls\") pod \"dns-operator-744455d44c-x9fkx\" (UID: \"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009189 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d85cm\" (UniqueName: \"kubernetes.io/projected/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-kube-api-access-d85cm\") pod \"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:06 crc 
kubenswrapper[4953]: I0223 00:09:06.009212 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/64893275-e793-4afa-9736-e52ec6ecf447-certs\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009235 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9d22e8-4e02-41dd-926d-d95d15beceeb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009258 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e231e84-62b1-447f-b8bd-713b1027e045-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009282 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-config\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009321 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-config\") pod \"kube-apiserver-operator-766d6c64bb-t882k\" (UID: 
\"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009344 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzj2k\" (UniqueName: \"kubernetes.io/projected/0b87923c-05c9-40bd-a84f-6b6462883363-kube-api-access-qzj2k\") pod \"ingress-canary-hwdnz\" (UID: \"0b87923c-05c9-40bd-a84f-6b6462883363\") " pod="openshift-ingress-canary/ingress-canary-hwdnz" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009367 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15299e81-114c-47b9-a589-1a8d78426736-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009390 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/486601c8-1d5e-4805-8ab4-d4a55de883f9-signing-key\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009414 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb671439-af50-4c0d-8f7c-d92b3571b2b0-secret-volume\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009436 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/cedfc67a-fc52-4c43-8962-619b6e436d60-webhook-cert\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009461 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvm94\" (UniqueName: \"kubernetes.io/projected/ef7c633a-9dcf-4613-9106-3c1f07a6afab-kube-api-access-zvm94\") pod \"downloads-7954f5f757-blqz2\" (UID: \"ef7c633a-9dcf-4613-9106-3c1f07a6afab\") " pod="openshift-console/downloads-7954f5f757-blqz2" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009483 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-csi-data-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009504 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a9783d6-f83d-4e9c-bbe8-815b994fede2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: \"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009527 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009550 
4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-config\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.009580 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.509561522 +0000 UTC m=+144.443403368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009625 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-registration-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009655 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cl2q\" (UniqueName: \"kubernetes.io/projected/cedfc67a-fc52-4c43-8962-619b6e436d60-kube-api-access-4cl2q\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009676 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzkm\" (UniqueName: \"kubernetes.io/projected/cb671439-af50-4c0d-8f7c-d92b3571b2b0-kube-api-access-5qzkm\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009693 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz5lw\" (UniqueName: \"kubernetes.io/projected/03337365-2181-4cf3-90d6-6664103f220b-kube-api-access-zz5lw\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009710 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-client-ca\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009730 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t882k\" (UID: \"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009779 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/64893275-e793-4afa-9736-e52ec6ecf447-node-bootstrap-token\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009802 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15299e81-114c-47b9-a589-1a8d78426736-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009823 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03337365-2181-4cf3-90d6-6664103f220b-config-volume\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009842 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q5w7\" (UniqueName: \"kubernetes.io/projected/bd9d22e8-4e02-41dd-926d-d95d15beceeb-kube-api-access-4q5w7\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009857 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/486601c8-1d5e-4805-8ab4-d4a55de883f9-signing-cabundle\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 
00:09:06.009878 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z98mq\" (UniqueName: \"kubernetes.io/projected/2e231e84-62b1-447f-b8bd-713b1027e045-kube-api-access-z98mq\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009901 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqd6c\" (UniqueName: \"kubernetes.io/projected/486601c8-1d5e-4805-8ab4-d4a55de883f9-kube-api-access-mqd6c\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009918 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-mountpoint-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009943 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsv4b\" (UniqueName: \"kubernetes.io/projected/69abc20a-54cb-47c6-884d-e12fd1984fdb-kube-api-access-jsv4b\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009966 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae22cab2-d791-4513-8794-e5d93b7447e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009983 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.009997 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v87h\" (UniqueName: \"kubernetes.io/projected/2e02a9c4-892c-419b-af9c-c8afc2158051-kube-api-access-4v87h\") pod \"olm-operator-6b444d44fb-n5rdt\" (UID: \"2e02a9c4-892c-419b-af9c-c8afc2158051\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010001 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-plugins-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010051 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdftp\" (UniqueName: \"kubernetes.io/projected/28f92e4d-87a5-4cb4-9f42-1301b5d4fc31-kube-api-access-jdftp\") pod \"multus-admission-controller-857f4d67dd-9x56p\" (UID: \"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010072 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhxds\" (UniqueName: 
\"kubernetes.io/projected/0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2-kube-api-access-nhxds\") pod \"dns-operator-744455d44c-x9fkx\" (UID: \"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010088 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6769w\" (UniqueName: \"kubernetes.io/projected/e7cca116-4f86-480b-9186-651912ae24d1-kube-api-access-6769w\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010116 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010142 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz558\" (UniqueName: \"kubernetes.io/projected/6b7576be-af0b-4553-bc96-125e87709ad1-kube-api-access-rz558\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjp8z\" (UID: \"6b7576be-af0b-4553-bc96-125e87709ad1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010159 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-config\") pod \"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:06 crc 
kubenswrapper[4953]: I0223 00:09:06.010175 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b63716df-3a08-4a47-8bd6-64827459651e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010192 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b87923c-05c9-40bd-a84f-6b6462883363-cert\") pod \"ingress-canary-hwdnz\" (UID: \"0b87923c-05c9-40bd-a84f-6b6462883363\") " pod="openshift-ingress-canary/ingress-canary-hwdnz" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010203 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-plugins-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010210 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz2hr\" (UniqueName: \"kubernetes.io/projected/b63716df-3a08-4a47-8bd6-64827459651e-kube-api-access-vz2hr\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010238 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwk94\" (UniqueName: \"kubernetes.io/projected/2a9783d6-f83d-4e9c-bbe8-815b994fede2-kube-api-access-pwk94\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: 
\"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010253 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9d22e8-4e02-41dd-926d-d95d15beceeb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010269 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010318 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-serving-cert\") pod \"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010338 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cedfc67a-fc52-4c43-8962-619b6e436d60-tmpfs\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010367 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/be696be4-2c84-4434-9d93-804c9bb6604b-serving-cert\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010393 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cedfc67a-fc52-4c43-8962-619b6e436d60-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010409 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9783d6-f83d-4e9c-bbe8-815b994fede2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: \"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010426 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/28f92e4d-87a5-4cb4-9f42-1301b5d4fc31-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9x56p\" (UID: \"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010443 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-config\") pod \"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010464 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15299e81-114c-47b9-a589-1a8d78426736-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010486 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf77c\" (UniqueName: \"kubernetes.io/projected/be696be4-2c84-4434-9d93-804c9bb6604b-kube-api-access-qf77c\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010506 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-client-ca\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010522 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010538 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/2e231e84-62b1-447f-b8bd-713b1027e045-images\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010552 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010570 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e231e84-62b1-447f-b8bd-713b1027e045-proxy-tls\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.010588 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.510580279 +0000 UTC m=+144.444422125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010607 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b63716df-3a08-4a47-8bd6-64827459651e-proxy-tls\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010629 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb671439-af50-4c0d-8f7c-d92b3571b2b0-config-volume\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010645 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03337365-2181-4cf3-90d6-6664103f220b-metrics-tls\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010652 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-config\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010665 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-socket-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010699 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b59cp\" (UniqueName: \"kubernetes.io/projected/64893275-e793-4afa-9736-e52ec6ecf447-kube-api-access-b59cp\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010702 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-socket-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.010757 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-config\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.011107 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-mountpoint-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: 
\"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.011272 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-config\") pod \"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.011275 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e231e84-62b1-447f-b8bd-713b1027e045-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.011389 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-config\") pod \"kube-apiserver-operator-766d6c64bb-t882k\" (UID: \"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.012151 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b63716df-3a08-4a47-8bd6-64827459651e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.012368 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/15299e81-114c-47b9-a589-1a8d78426736-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.012446 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-csi-data-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.016497 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e7cca116-4f86-480b-9186-651912ae24d1-registration-dir\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.020368 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sbhgq" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.021106 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a9783d6-f83d-4e9c-bbe8-815b994fede2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: \"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.022516 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-client-ca\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.022542 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-config\") pod \"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.022793 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/64893275-e793-4afa-9736-e52ec6ecf447-certs\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.022809 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e231e84-62b1-447f-b8bd-713b1027e045-proxy-tls\") pod 
\"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.022859 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.023180 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd9d22e8-4e02-41dd-926d-d95d15beceeb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.023223 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2e231e84-62b1-447f-b8bd-713b1027e045-images\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.023830 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b63716df-3a08-4a47-8bd6-64827459651e-proxy-tls\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.024100 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.024570 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cedfc67a-fc52-4c43-8962-619b6e436d60-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.024833 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb671439-af50-4c0d-8f7c-d92b3571b2b0-config-volume\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.025557 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-client-ca\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.026063 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t882k\" (UID: \"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" Feb 
23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.026171 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/486601c8-1d5e-4805-8ab4-d4a55de883f9-signing-cabundle\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.026250 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03337365-2181-4cf3-90d6-6664103f220b-config-volume\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.026252 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.026419 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cedfc67a-fc52-4c43-8962-619b6e436d60-tmpfs\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.026817 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/64893275-e793-4afa-9736-e52ec6ecf447-node-bootstrap-token\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.027238 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/28f92e4d-87a5-4cb4-9f42-1301b5d4fc31-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9x56p\" (UID: \"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.027344 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.027814 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/486601c8-1d5e-4805-8ab4-d4a55de883f9-signing-key\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.027871 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b87923c-05c9-40bd-a84f-6b6462883363-cert\") pod \"ingress-canary-hwdnz\" (UID: \"0b87923c-05c9-40bd-a84f-6b6462883363\") " pod="openshift-ingress-canary/ingress-canary-hwdnz" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.030511 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b7576be-af0b-4553-bc96-125e87709ad1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjp8z\" (UID: \"6b7576be-af0b-4553-bc96-125e87709ad1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.030695 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cedfc67a-fc52-4c43-8962-619b6e436d60-webhook-cert\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.030850 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb671439-af50-4c0d-8f7c-d92b3571b2b0-secret-volume\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.031882 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.034962 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2-metrics-tls\") pod \"dns-operator-744455d44c-x9fkx\" (UID: \"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.036137 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15299e81-114c-47b9-a589-1a8d78426736-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.036477 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-serving-cert\") pod 
\"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.036650 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be696be4-2c84-4434-9d93-804c9bb6604b-serving-cert\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.037314 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03337365-2181-4cf3-90d6-6664103f220b-metrics-tls\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.038896 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.039549 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a9783d6-f83d-4e9c-bbe8-815b994fede2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: \"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.039778 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ae22cab2-d791-4513-8794-e5d93b7447e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:06 crc kubenswrapper[4953]: W0223 00:09:06.042215 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf725846e_4a85_49f1_9f43_bf53a4f066db.slice/crio-c1d95e6a22194087e80506e4de865b33ad597664bde79170afb8f4540566c192 WatchSource:0}: Error finding container c1d95e6a22194087e80506e4de865b33ad597664bde79170afb8f4540566c192: Status 404 returned error can't find the container with id c1d95e6a22194087e80506e4de865b33ad597664bde79170afb8f4540566c192 Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.045057 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7"] Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.056495 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d85cm\" (UniqueName: \"kubernetes.io/projected/dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419-kube-api-access-d85cm\") pod \"service-ca-operator-777779d784-mrwl8\" (UID: \"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.066969 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz2hr\" (UniqueName: \"kubernetes.io/projected/b63716df-3a08-4a47-8bd6-64827459651e-kube-api-access-vz2hr\") pod \"machine-config-controller-84d6567774-dzswb\" (UID: \"b63716df-3a08-4a47-8bd6-64827459651e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.067484 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.074642 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-p96zz" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.088464 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.090664 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.096318 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24"] Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.097050 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdftp\" (UniqueName: \"kubernetes.io/projected/28f92e4d-87a5-4cb4-9f42-1301b5d4fc31-kube-api-access-jdftp\") pod \"multus-admission-controller-857f4d67dd-9x56p\" (UID: \"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.106121 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhxds\" (UniqueName: \"kubernetes.io/projected/0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2-kube-api-access-nhxds\") pod \"dns-operator-744455d44c-x9fkx\" (UID: \"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" Feb 23 00:09:06 crc kubenswrapper[4953]: W0223 00:09:06.109006 4953 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b0919d1_104f_4a33_b5df_d604cfa672f2.slice/crio-f8924524f3799d4bd62ec58ed5b107b9a6383ca1ecbdf87a987aba239370e88e WatchSource:0}: Error finding container f8924524f3799d4bd62ec58ed5b107b9a6383ca1ecbdf87a987aba239370e88e: Status 404 returned error can't find the container with id f8924524f3799d4bd62ec58ed5b107b9a6383ca1ecbdf87a987aba239370e88e Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.109226 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6" Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.112169 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.112504 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.612419184 +0000 UTC m=+144.546261030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.112689 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" event={"ID":"f6643e75-78f7-40fc-b597-d37fa9381727","Type":"ContainerStarted","Data":"ffddd648a238c5c1a94f6cebc063358e0121e9f1c88a7841a34d096ddfafb266"} Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.113012 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.113329 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.613321038 +0000 UTC m=+144.547162884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.116973 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" event={"ID":"d5581f85-7931-4952-9fb8-4829ef46865a","Type":"ContainerStarted","Data":"302e796f4858e365d9ad2b6032f04cfbedb83c3f21fb9f7645ad730bf98854d3"} Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.117016 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" event={"ID":"d5581f85-7931-4952-9fb8-4829ef46865a","Type":"ContainerStarted","Data":"d77e9d9a575d6494eecd287f9c0c2c13a13f11b47800aad385c6464d85bc2fe3"} Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.120474 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.123001 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dr5n9" event={"ID":"f725846e-4a85-49f1-9f43-bf53a4f066db","Type":"ContainerStarted","Data":"c1d95e6a22194087e80506e4de865b33ad597664bde79170afb8f4540566c192"}
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.128801 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6769w\" (UniqueName: \"kubernetes.io/projected/e7cca116-4f86-480b-9186-651912ae24d1-kube-api-access-6769w\") pod \"csi-hostpathplugin-n5dbh\" (UID: \"e7cca116-4f86-480b-9186-651912ae24d1\") " pod="hostpath-provisioner/csi-hostpathplugin-n5dbh"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.170148 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz558\" (UniqueName: \"kubernetes.io/projected/6b7576be-af0b-4553-bc96-125e87709ad1-kube-api-access-rz558\") pod \"control-plane-machine-set-operator-78cbb6b69f-pjp8z\" (UID: \"6b7576be-af0b-4553-bc96-125e87709ad1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.170900 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26"]
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.193321 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b59cp\" (UniqueName: \"kubernetes.io/projected/64893275-e793-4afa-9736-e52ec6ecf447-kube-api-access-b59cp\") pod \"machine-config-server-dztnb\" (UID: \"64893275-e793-4afa-9736-e52ec6ecf447\") " pod="openshift-machine-config-operator/machine-config-server-dztnb"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.198918 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gmlm\" (UniqueName: \"kubernetes.io/projected/ae22cab2-d791-4513-8794-e5d93b7447e5-kube-api-access-4gmlm\") pod \"route-controller-manager-6576b87f9c-zblkc\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.204141 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.223690 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.223760 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.223973 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.72391839 +0000 UTC m=+144.657760236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.224216 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.227266 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.727248661 +0000 UTC m=+144.661090507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.228617 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl"]
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.229326 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzj2k\" (UniqueName: \"kubernetes.io/projected/0b87923c-05c9-40bd-a84f-6b6462883363-kube-api-access-qzj2k\") pod \"ingress-canary-hwdnz\" (UID: \"0b87923c-05c9-40bd-a84f-6b6462883363\") " pod="openshift-ingress-canary/ingress-canary-hwdnz"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.279684 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.281496 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvm94\" (UniqueName: \"kubernetes.io/projected/ef7c633a-9dcf-4613-9106-3c1f07a6afab-kube-api-access-zvm94\") pod \"downloads-7954f5f757-blqz2\" (UID: \"ef7c633a-9dcf-4613-9106-3c1f07a6afab\") " pod="openshift-console/downloads-7954f5f757-blqz2"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.283857 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dztnb"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.291042 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n5dbh"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.295040 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hwdnz"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.301590 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15299e81-114c-47b9-a589-1a8d78426736-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8gp45\" (UID: \"15299e81-114c-47b9-a589-1a8d78426736\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.304853 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf77c\" (UniqueName: \"kubernetes.io/projected/be696be4-2c84-4434-9d93-804c9bb6604b-kube-api-access-qf77c\") pod \"controller-manager-879f6c89f-vtp7m\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.306474 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsv4b\" (UniqueName: \"kubernetes.io/projected/69abc20a-54cb-47c6-884d-e12fd1984fdb-kube-api-access-jsv4b\") pod \"marketplace-operator-79b997595-pjdts\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-pjdts"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.309217 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzkm\" (UniqueName: \"kubernetes.io/projected/cb671439-af50-4c0d-8f7c-d92b3571b2b0-kube-api-access-5qzkm\") pod \"collect-profiles-29530080-tpjsc\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.327982 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.328385 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.828370246 +0000 UTC m=+144.762212092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.328655 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cl2q\" (UniqueName: \"kubernetes.io/projected/cedfc67a-fc52-4c43-8962-619b6e436d60-kube-api-access-4cl2q\") pod \"packageserver-d55dfcdfc-kqn62\" (UID: \"cedfc67a-fc52-4c43-8962-619b6e436d60\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.350036 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz5lw\" (UniqueName: \"kubernetes.io/projected/03337365-2181-4cf3-90d6-6664103f220b-kube-api-access-zz5lw\") pod \"dns-default-vr4sx\" (UID: \"03337365-2181-4cf3-90d6-6664103f220b\") " pod="openshift-dns/dns-default-vr4sx"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.378996 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t882k\" (UID: \"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.402307 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z98mq\" (UniqueName: \"kubernetes.io/projected/2e231e84-62b1-447f-b8bd-713b1027e045-kube-api-access-z98mq\") pod \"machine-config-operator-74547568cd-cb4fk\" (UID: \"2e231e84-62b1-447f-b8bd-713b1027e045\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.412156 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqd6c\" (UniqueName: \"kubernetes.io/projected/486601c8-1d5e-4805-8ab4-d4a55de883f9-kube-api-access-mqd6c\") pod \"service-ca-9c57cc56f-t92pb\" (UID: \"486601c8-1d5e-4805-8ab4-d4a55de883f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-t92pb"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.421257 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.429236 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.429460 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.429782 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:06.929770778 +0000 UTC m=+144.863612624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.436633 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.453523 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.457108 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q5w7\" (UniqueName: \"kubernetes.io/projected/bd9d22e8-4e02-41dd-926d-d95d15beceeb-kube-api-access-4q5w7\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9sjq\" (UID: \"bd9d22e8-4e02-41dd-926d-d95d15beceeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.463858 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwk94\" (UniqueName: \"kubernetes.io/projected/2a9783d6-f83d-4e9c-bbe8-815b994fede2-kube-api-access-pwk94\") pod \"openshift-controller-manager-operator-756b6f6bc6-q4d7k\" (UID: \"2a9783d6-f83d-4e9c-bbe8-815b994fede2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.478009 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-blqz2"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.478812 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/957f8efa-8db6-4dd9-969d-2ec80e8c6f7a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-t5h8l\" (UID: \"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.493680 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.498214 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc"
Feb 23 00:09:06 crc kubenswrapper[4953]: W0223 00:09:06.506260 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64893275_e793_4afa_9736_e52ec6ecf447.slice/crio-d50b914b2a233d7f25d969da7149b8f41fc9956f23f0333b7e6953e49b834ebd WatchSource:0}: Error finding container d50b914b2a233d7f25d969da7149b8f41fc9956f23f0333b7e6953e49b834ebd: Status 404 returned error can't find the container with id d50b914b2a233d7f25d969da7149b8f41fc9956f23f0333b7e6953e49b834ebd
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.510818 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.524707 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t92pb"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.530999 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.531215 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.031188811 +0000 UTC m=+144.965030667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.531328 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.531937 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.031922861 +0000 UTC m=+144.965764717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.534074 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.541763 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.548588 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.558058 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.568790 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vr4sx"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.632981 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.633410 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.133394556 +0000 UTC m=+145.067236402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.734157 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.734738 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.234725856 +0000 UTC m=+145.168567702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.773192 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l"
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.850326 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.850476 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.350450708 +0000 UTC m=+145.284292554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.850576 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.851849 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.351836595 +0000 UTC m=+145.285678431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.952993 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.953133 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.453102324 +0000 UTC m=+145.386944180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.953420 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:06 crc kubenswrapper[4953]: E0223 00:09:06.953912 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.453900726 +0000 UTC m=+145.387742572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:06 crc kubenswrapper[4953]: I0223 00:09:06.968592 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk"]
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.054134 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.054489 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.554473896 +0000 UTC m=+145.488315742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.054571 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.054952 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.554941899 +0000 UTC m=+145.488783745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.137198 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" event={"ID":"8edda615-0162-4066-98cc-3dd760855917","Type":"ContainerStarted","Data":"98879c07c170bd1a00f0404c293ac440b66f4df5340d7e564175690b31ab56de"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.144416 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" event={"ID":"f6643e75-78f7-40fc-b597-d37fa9381727","Type":"ContainerStarted","Data":"32d456b875105b5fbc57e0a04fc341504d85d9569c35554cc0f4c9e2b6b157c7"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.149918 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" event={"ID":"d5581f85-7931-4952-9fb8-4829ef46865a","Type":"ContainerStarted","Data":"f2fdf6fc46ff7a5e7dd101fa6d8826c9a7bd5c0e26672aef78a0c55c349e2a92"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.152835 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" event={"ID":"60b432c5-5097-40f3-983b-3a2355744ee3","Type":"ContainerStarted","Data":"e146d5ecf1b970863bdc4f990e266bf21d5ece0460587eb2bfa8f0783fa9f928"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.155862 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.156220 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.656205567 +0000 UTC m=+145.590047413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.158399 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dztnb" event={"ID":"64893275-e793-4afa-9736-e52ec6ecf447","Type":"ContainerStarted","Data":"d50b914b2a233d7f25d969da7149b8f41fc9956f23f0333b7e6953e49b834ebd"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.173260 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" event={"ID":"1b0919d1-104f-4a33-b5df-d604cfa672f2","Type":"ContainerStarted","Data":"f8924524f3799d4bd62ec58ed5b107b9a6383ca1ecbdf87a987aba239370e88e"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.177693 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-p96zz" event={"ID":"045ff803-aa45-4faa-b4ee-7f0de4093f04","Type":"ContainerStarted","Data":"bc868dfafed6fac85f58389120d0761751d488b8545acbe2a456e760bd2d7103"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.177868 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sb7m5"]
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.178977 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" event={"ID":"a8af278f-dfdc-45c1-84da-df3c70951061","Type":"ContainerStarted","Data":"2f0cec5a76b05c834d468bcb09b1ff790c8d7e26a06561b062ffb56412bff920"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.181811 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" event={"ID":"7402d900-4b66-4c9d-8904-b35c3d4b06c7","Type":"ContainerStarted","Data":"6a940de842ea4562d7122f3224386d37109a05692e6537c6b2f776030874e9c1"}
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.189091 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xd97t"]
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.209209 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29530080-nw6hr"]
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.257919 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.259496 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.759479011 +0000 UTC m=+145.693320857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:07 crc kubenswrapper[4953]: W0223 00:09:07.288338 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90b2280_0314_4b8a_979f_d678ee9a4a98.slice/crio-5dfb7cbae51a698008d4e4cdaca99e67c6a87a12cbbbe3a8fa1359320daf2bf6 WatchSource:0}: Error finding container 5dfb7cbae51a698008d4e4cdaca99e67c6a87a12cbbbe3a8fa1359320daf2bf6: Status 404 returned error can't find the container with id 5dfb7cbae51a698008d4e4cdaca99e67c6a87a12cbbbe3a8fa1359320daf2bf6
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.360429 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.360575 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.860547164 +0000 UTC m=+145.794389010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.361332 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.362242 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.862205259 +0000 UTC m=+145.796047105 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.383572 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.395471 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sbhgq"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.397221 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-szx7x"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.413170 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-f6ptk"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.437528 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.445379 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.466423 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 
00:09:07.467122 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:07.967105547 +0000 UTC m=+145.900947393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.469367 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.473729 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.475618 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.480497 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9x56p"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.511326 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hwdnz"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.567800 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.568105 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.068093368 +0000 UTC m=+146.001935214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.669089 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.669193 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.169171572 +0000 UTC m=+146.103013418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.670502 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.670798 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.170788786 +0000 UTC m=+146.104630632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.705157 4953 csr.go:261] certificate signing request csr-zxp7l is approved, waiting to be issued Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.711196 4953 csr.go:257] certificate signing request csr-zxp7l is issued Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.781625 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.783572 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.283553237 +0000 UTC m=+146.217395083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.815891 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.863334 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.880967 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.885460 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pjdts"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.887229 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.887639 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 00:09:08.387624852 +0000 UTC m=+146.321466698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.889912 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.893699 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n5dbh"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.896620 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t92pb"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.900590 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x9fkx"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.912702 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.914509 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.916143 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtp7m"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.917410 4953 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-dns/dns-default-vr4sx"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.919822 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.927418 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-blqz2"] Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.928778 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb"] Feb 23 00:09:07 crc kubenswrapper[4953]: W0223 00:09:07.937558 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69abc20a_54cb_47c6_884d_e12fd1984fdb.slice/crio-a3c2d3de6b5af9c7af76944d6b004c0d4ce3610f1ad0f6740681b25e530674f2 WatchSource:0}: Error finding container a3c2d3de6b5af9c7af76944d6b004c0d4ce3610f1ad0f6740681b25e530674f2: Status 404 returned error can't find the container with id a3c2d3de6b5af9c7af76944d6b004c0d4ce3610f1ad0f6740681b25e530674f2 Feb 23 00:09:07 crc kubenswrapper[4953]: W0223 00:09:07.942794 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a9783d6_f83d_4e9c_bbe8_815b994fede2.slice/crio-2f514fe6f2ce9326a565bcdd102da15289653ca76870a3d1c674f249879d69e0 WatchSource:0}: Error finding container 2f514fe6f2ce9326a565bcdd102da15289653ca76870a3d1c674f249879d69e0: Status 404 returned error can't find the container with id 2f514fe6f2ce9326a565bcdd102da15289653ca76870a3d1c674f249879d69e0 Feb 23 00:09:07 crc kubenswrapper[4953]: W0223 00:09:07.955695 4953 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03337365_2181_4cf3_90d6_6664103f220b.slice/crio-885eaf40ad3aa0495767a3058b24989878308724d7af81842b432b1740e50c21 WatchSource:0}: Error finding container 885eaf40ad3aa0495767a3058b24989878308724d7af81842b432b1740e50c21: Status 404 returned error can't find the container with id 885eaf40ad3aa0495767a3058b24989878308724d7af81842b432b1740e50c21 Feb 23 00:09:07 crc kubenswrapper[4953]: W0223 00:09:07.960998 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9fe5f7a_039b_4b0d_9f6e_21ccef0cd280.slice/crio-bb402441b98967f53217546ade0b137485336323d0cbfeab1affb13dc42a968f WatchSource:0}: Error finding container bb402441b98967f53217546ade0b137485336323d0cbfeab1affb13dc42a968f: Status 404 returned error can't find the container with id bb402441b98967f53217546ade0b137485336323d0cbfeab1affb13dc42a968f Feb 23 00:09:07 crc kubenswrapper[4953]: I0223 00:09:07.989198 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:07 crc kubenswrapper[4953]: E0223 00:09:07.989581 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.489565799 +0000 UTC m=+146.423407645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.026810 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l"] Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.062038 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk"] Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.090910 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.091640 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.5916274 +0000 UTC m=+146.525469246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.091974 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq"] Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.092556 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v2lhk" podStartSLOduration=123.092536415 podStartE2EDuration="2m3.092536415s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:08.087780975 +0000 UTC m=+146.021622821" watchObservedRunningTime="2026-02-23 00:09:08.092536415 +0000 UTC m=+146.026378261" Feb 23 00:09:08 crc kubenswrapper[4953]: W0223 00:09:08.183077 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e231e84_62b1_447f_b8bd_713b1027e045.slice/crio-01191bdb4c16d3f194c32e742f845d3561489728a59efbdd76c58606c6075aba WatchSource:0}: Error finding container 01191bdb4c16d3f194c32e742f845d3561489728a59efbdd76c58606c6075aba: Status 404 returned error can't find the container with id 01191bdb4c16d3f194c32e742f845d3561489728a59efbdd76c58606c6075aba Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.190837 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" event={"ID":"cb671439-af50-4c0d-8f7c-d92b3571b2b0","Type":"ContainerStarted","Data":"8b0a104ad669a7cdf0d9725406e4f0996a98f3b5ddbde5e6cc61dc34263ce660"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.192069 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.192332 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" event={"ID":"15299e81-114c-47b9-a589-1a8d78426736","Type":"ContainerStarted","Data":"b5c2f0c274f963b9f511236d7f0502777bc48f02577cedf57099321dd03aa6ac"} Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.192442 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.692427436 +0000 UTC m=+146.626269282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.192546 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.192821 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.692813407 +0000 UTC m=+146.626655253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.195877 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hwdnz" event={"ID":"0b87923c-05c9-40bd-a84f-6b6462883363","Type":"ContainerStarted","Data":"c6317cd74ce2c393f7994ba3dc95d65c9e84dc578b714e0b8f5e980dde8cffb6"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.195930 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hwdnz" event={"ID":"0b87923c-05c9-40bd-a84f-6b6462883363","Type":"ContainerStarted","Data":"d53da16aa01feb39a43d10bd106caccb55614b140a500017ea0a126a005c3681"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.200093 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" event={"ID":"26de90f6-24f2-4b9a-b60e-8dcc999890e8","Type":"ContainerStarted","Data":"672ae4b197acdfbc2e251cb997cad16b15f1992a34f5059dda70632211c6b46a"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.200123 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" event={"ID":"26de90f6-24f2-4b9a-b60e-8dcc999890e8","Type":"ContainerStarted","Data":"dffb3f11643befbb93056f743580f69cc0efdaf04fea40730f528c65a2e5e9cb"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.201824 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6" 
event={"ID":"ddd17504-ce95-4253-8c88-1f5cf50f9184","Type":"ContainerStarted","Data":"33b63522b9469770da36b555d1e2efcd68e665d1c8ee7ae53302a9cb68300fc2"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.201849 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6" event={"ID":"ddd17504-ce95-4253-8c88-1f5cf50f9184","Type":"ContainerStarted","Data":"86cd32e27a2148a120c4f09ad735cc2edb9841d59d87bb4f90cd6ca7a711f263"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.209355 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29530080-nw6hr" event={"ID":"2c463426-aee8-41c2-8f08-e553efa4742a","Type":"ContainerStarted","Data":"399ed9bd77b6f92a7de534442e0d4cf1536d9369a003293a9ab2154aeaf5803e"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.209468 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29530080-nw6hr" event={"ID":"2c463426-aee8-41c2-8f08-e553efa4742a","Type":"ContainerStarted","Data":"b912855d21e7aaf95e1c76286411647c1184d2904ec6cdaa385e306b37fc8c89"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.213224 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" event={"ID":"b63716df-3a08-4a47-8bd6-64827459651e","Type":"ContainerStarted","Data":"30e9202a3830a4c5c5ce86ec089a001bd965c133de9dc69a5ed0431acf42c1a3"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.214274 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" event={"ID":"6b7576be-af0b-4553-bc96-125e87709ad1","Type":"ContainerStarted","Data":"3b4334e5e91a3430e51fe02ab51e17923faa3f2a8d9c8eed1b8f7a39b8b1fce0"} Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.228010 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-p96zz" event={"ID":"045ff803-aa45-4faa-b4ee-7f0de4093f04","Type":"ContainerStarted","Data":"2fcc71c27e3dc734f5bc700719d1583575aacb619881e33e418bca7800ca6b4e"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.232730 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dztnb" event={"ID":"64893275-e793-4afa-9736-e52ec6ecf447","Type":"ContainerStarted","Data":"c78097292a7a18f19fafcc489bd593ba61ec7957e1ee7752f5e233cb6877203b"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.258940 4953 generic.go:334] "Generic (PLEG): container finished" podID="7402d900-4b66-4c9d-8904-b35c3d4b06c7" containerID="00bd34bd62f5b4c232f6eedd1ff361f42f25377fcfd7ff561ffe9b5099f7af09" exitCode=0
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.259627 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" event={"ID":"7402d900-4b66-4c9d-8904-b35c3d4b06c7","Type":"ContainerDied","Data":"00bd34bd62f5b4c232f6eedd1ff361f42f25377fcfd7ff561ffe9b5099f7af09"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.261764 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" event={"ID":"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2","Type":"ContainerStarted","Data":"c17067266782419ba8e2d7170d43d5df7bbf106eec7a7d65b15bf4b9d21a5e94"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.263742 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" event={"ID":"ae22cab2-d791-4513-8794-e5d93b7447e5","Type":"ContainerStarted","Data":"7327d209d6df8a649fbf876cb3105de94243316184ad78871ce94c188315f975"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.269200 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" event={"ID":"bb747d6b-337e-4040-948b-88b262efd03b","Type":"ContainerStarted","Data":"9dd27b1b949dae1356125fd88b15785facb0a64ebef912ac4b4ef8a9d1c6f7ad"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.269234 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" event={"ID":"bb747d6b-337e-4040-948b-88b262efd03b","Type":"ContainerStarted","Data":"994b958877a117f87937577ed2a095d8dd0ad7a3ed7aa668579692d033637061"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.273121 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" event={"ID":"60b432c5-5097-40f3-983b-3a2355744ee3","Type":"ContainerStarted","Data":"ec1f27e8d7ef388900716e793fe7b76a0fd4da59a5925ebdea14d2dc2dcccc22"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.293313 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.294059 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.794034774 +0000 UTC m=+146.727876620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.295714 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.296054 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.796043329 +0000 UTC m=+146.729885175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.301991 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" event={"ID":"87070b2f-f30b-445a-8134-6708a6e6790e","Type":"ContainerStarted","Data":"435343cefa0c9f776d2296298a3e902fa9869f80fc12116b852359a9128789e5"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.302032 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" event={"ID":"87070b2f-f30b-445a-8134-6708a6e6790e","Type":"ContainerStarted","Data":"5a57d2450703477519b4002ae3533e92a1cbe33dfa7271a53274bf2176e41232"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.302232 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.303667 4953 generic.go:334] "Generic (PLEG): container finished" podID="0f48e137-1959-43ff-8386-e7560140f2d4" containerID="04259464ae788f8ef6c6b55009e29c461f5c7bbd4673f1c14f0e2df0613c1276" exitCode=0
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.303760 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" event={"ID":"0f48e137-1959-43ff-8386-e7560140f2d4","Type":"ContainerDied","Data":"04259464ae788f8ef6c6b55009e29c461f5c7bbd4673f1c14f0e2df0613c1276"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.303795 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" event={"ID":"0f48e137-1959-43ff-8386-e7560140f2d4","Type":"ContainerStarted","Data":"e0deb468b16db31c5354febdbcce7088521e2d9eeeda76e5f0b70b2327c2039a"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.314378 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" event={"ID":"8edda615-0162-4066-98cc-3dd760855917","Type":"ContainerStarted","Data":"db97e10ad337f9c6e302c1ea986306d41933b6052bda7612e9fdea5b84b8a447"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.319231 4953 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9gcsg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.319305 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" podUID="87070b2f-f30b-445a-8134-6708a6e6790e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.326308 4953 generic.go:334] "Generic (PLEG): container finished" podID="1b0919d1-104f-4a33-b5df-d604cfa672f2" containerID="c7767332d9a5731bc6577b7f251f38925ad0cf6ce9304d258623720e86dc5204" exitCode=0
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.326470 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" event={"ID":"1b0919d1-104f-4a33-b5df-d604cfa672f2","Type":"ContainerDied","Data":"c7767332d9a5731bc6577b7f251f38925ad0cf6ce9304d258623720e86dc5204"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.341033 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" event={"ID":"2a9783d6-f83d-4e9c-bbe8-815b994fede2","Type":"ContainerStarted","Data":"2f514fe6f2ce9326a565bcdd102da15289653ca76870a3d1c674f249879d69e0"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.353192 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sbhgq" event={"ID":"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b","Type":"ContainerStarted","Data":"d86621986c54eb0f4ca67ae5f6bede3c30830317b3b03b7422952607f1ba0a8a"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.353240 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sbhgq" event={"ID":"4d7fd8ab-12ef-4686-887c-3f4acbb5a30b","Type":"ContainerStarted","Data":"29f39058b4e5157a5dc90f514052b61e813db8c84ae97d2cc34fc0f0e130f62d"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.356456 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" event={"ID":"69abc20a-54cb-47c6-884d-e12fd1984fdb","Type":"ContainerStarted","Data":"a3c2d3de6b5af9c7af76944d6b004c0d4ce3610f1ad0f6740681b25e530674f2"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.370062 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" event={"ID":"be696be4-2c84-4434-9d93-804c9bb6604b","Type":"ContainerStarted","Data":"093f25019f69e073392e5bc93fca27fab07277a38c89d7577aeff999706e816f"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.384648 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vr4sx" event={"ID":"03337365-2181-4cf3-90d6-6664103f220b","Type":"ContainerStarted","Data":"885eaf40ad3aa0495767a3058b24989878308724d7af81842b432b1740e50c21"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.397835 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.399102 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:08.899084896 +0000 UTC m=+146.832926742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.409123 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" event={"ID":"cedfc67a-fc52-4c43-8962-619b6e436d60","Type":"ContainerStarted","Data":"0069a2f710f700b45cdb14b79b353ec07851bdc84c536c22b7890cc66067530f"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.412926 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" event={"ID":"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31","Type":"ContainerStarted","Data":"d25190b587c395d0f236d41e80db934efe78451870c948379b3a24f09f5a6256"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.440244 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" event={"ID":"b88234f7-8355-4c3b-a5f3-6195bbe46bee","Type":"ContainerStarted","Data":"a10f0d4c6a7f74adf0ad318d497a11dbd1355ee623cfe3dd5f1d8b054da09876"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.440309 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" event={"ID":"b88234f7-8355-4c3b-a5f3-6195bbe46bee","Type":"ContainerStarted","Data":"a12ba796a335b6eeb24f27138cd517145cdbdfc51791d214e097ea53878eb637"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.459093 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" event={"ID":"a8af278f-dfdc-45c1-84da-df3c70951061","Type":"ContainerStarted","Data":"a5f468098bfdf186285e5a15415f39cd25e5fde4b9317b2ff805b1bc95d0854a"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.459506 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" event={"ID":"a8af278f-dfdc-45c1-84da-df3c70951061","Type":"ContainerStarted","Data":"eb016eb8b5709c3969c8c911269d0386a30f4a04bca07d21c4b3e9a22b44245c"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.460128 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.469045 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" event={"ID":"2e02a9c4-892c-419b-af9c-c8afc2158051","Type":"ContainerStarted","Data":"6c86a6a4ad0d00fc0e73d7937977cbb4a4eb5b336739919e16ebb3e49c2d7a0c"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.469077 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" event={"ID":"2e02a9c4-892c-419b-af9c-c8afc2158051","Type":"ContainerStarted","Data":"9898e5d97b53585e470d9f83b05685765326e1b32ee92d472880c0a1d3d7657c"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.469704 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.472941 4953 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-n5rdt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.472994 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" podUID="2e02a9c4-892c-419b-af9c-c8afc2158051" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.475865 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" event={"ID":"c90b2280-0314-4b8a-979f-d678ee9a4a98","Type":"ContainerStarted","Data":"b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.475890 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" event={"ID":"c90b2280-0314-4b8a-979f-d678ee9a4a98","Type":"ContainerStarted","Data":"5dfb7cbae51a698008d4e4cdaca99e67c6a87a12cbbbe3a8fa1359320daf2bf6"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.476093 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.477397 4953 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sb7m5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body=
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.477435 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" podUID="c90b2280-0314-4b8a-979f-d678ee9a4a98" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.479873 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" event={"ID":"f6643e75-78f7-40fc-b597-d37fa9381727","Type":"ContainerStarted","Data":"153c2de589fd56e52b87fa25e488fd9b9cafbae055ea820e39ad6d68cec07963"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.492828 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dr5n9" event={"ID":"f725846e-4a85-49f1-9f43-bf53a4f066db","Type":"ContainerStarted","Data":"9a51978f213796fd75662cc8f4c159db35d3610b7aea02df1267539a379d948b"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.493760 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dr5n9"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.496952 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" event={"ID":"e7cca116-4f86-480b-9186-651912ae24d1","Type":"ContainerStarted","Data":"12710a034f55c1fac0d7aee1740e61a84ce4480fea7755b0791a2aae6b58dfb3"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.499851 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-blqz2" event={"ID":"ef7c633a-9dcf-4613-9106-3c1f07a6afab","Type":"ContainerStarted","Data":"d59a000c62536c78a9d261f65366f06f0333f927d31be106a3473a8f9cfc4bda"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.501130 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-blqz2"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.501827 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-blqz2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.501842 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.501860 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-blqz2" podUID="ef7c633a-9dcf-4613-9106-3c1f07a6afab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.502122 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.002109223 +0000 UTC m=+146.935951069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.504013 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" event={"ID":"d094612c-465f-4bec-b3f8-bf28a1471217","Type":"ContainerStarted","Data":"502a4ec1d7c012cba973c1323b69a0bbc5d8566e590bdd54e093c08be98e55f9"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.504043 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" event={"ID":"d094612c-465f-4bec-b3f8-bf28a1471217","Type":"ContainerStarted","Data":"fadb4ddf03cc58cd8770bdd6c10cd32f7b72598282ba38c9a937f5ec922a8d28"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.509916 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" event={"ID":"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419","Type":"ContainerStarted","Data":"dc19bd957e0318cb8a82ddb61d753b898f39d2e466a9867f695f8dfcaffedddb"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.509953 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" event={"ID":"dde38f0d-9b9b-4c0f-83c5-7b1f6e6a7419","Type":"ContainerStarted","Data":"20a22f90ca887aeded079b3ba655da2d72b479b743b3a690ba8d7f629810c7d4"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.516653 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" event={"ID":"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280","Type":"ContainerStarted","Data":"bb402441b98967f53217546ade0b137485336323d0cbfeab1affb13dc42a968f"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.548481 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" event={"ID":"486601c8-1d5e-4805-8ab4-d4a55de883f9","Type":"ContainerStarted","Data":"432963ea8373b358a50dee59d6ac37635875d968dbb6aee2ea1d4ff04370fcf5"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.566367 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" event={"ID":"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a","Type":"ContainerStarted","Data":"163b1ef9bde57c026331f2ccee08199d17f7c222eadaa124c80d245253d0d864"}
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.583476 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dr5n9"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.604354 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.607394 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.10736899 +0000 UTC m=+147.041210876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.719472 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-23 00:04:07 +0000 UTC, rotation deadline is 2026-12-30 00:09:45.104570456 +0000 UTC
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.719750 4953 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7440h0m36.384823306s for next certificate rotation
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.725277 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.727096 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.227067499 +0000 UTC m=+147.160909415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.742890 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" podStartSLOduration=123.742869188 podStartE2EDuration="2m3.742869188s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:08.739519407 +0000 UTC m=+146.673361263" watchObservedRunningTime="2026-02-23 00:09:08.742869188 +0000 UTC m=+146.676711034"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.800909 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29530080-nw6hr" podStartSLOduration=123.800886983 podStartE2EDuration="2m3.800886983s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:08.784680313 +0000 UTC m=+146.718522159" watchObservedRunningTime="2026-02-23 00:09:08.800886983 +0000 UTC m=+146.734728829"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.826847 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.827211 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.327193046 +0000 UTC m=+147.261034892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.912317 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dr5n9" podStartSLOduration=123.912280716 podStartE2EDuration="2m3.912280716s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:08.861684852 +0000 UTC m=+146.795526698" watchObservedRunningTime="2026-02-23 00:09:08.912280716 +0000 UTC m=+146.846122562"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.929907 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:08 crc kubenswrapper[4953]: E0223 00:09:08.930220 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.430209142 +0000 UTC m=+147.364050988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.950889 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g6rdl" podStartSLOduration=123.950868523 podStartE2EDuration="2m3.950868523s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:08.912280216 +0000 UTC m=+146.846122082" watchObservedRunningTime="2026-02-23 00:09:08.950868523 +0000 UTC m=+146.884710369"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.952336 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-p96zz" podStartSLOduration=123.952329573 podStartE2EDuration="2m3.952329573s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:08.951934232 +0000 UTC m=+146.885776078" watchObservedRunningTime="2026-02-23 00:09:08.952329573 +0000 UTC m=+146.886171419"
Feb 23 00:09:08 crc kubenswrapper[4953]: I0223 00:09:08.989543 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sbhgq" podStartSLOduration=123.989521352 podStartE2EDuration="2m3.989521352s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:08.980810956 +0000 UTC m=+146.914652812" watchObservedRunningTime="2026-02-23 00:09:08.989521352 +0000 UTC m=+146.923363198"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.011206 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xd97t" podStartSLOduration=124.011189181 podStartE2EDuration="2m4.011189181s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.009894536 +0000 UTC m=+146.943736392" watchObservedRunningTime="2026-02-23 00:09:09.011189181 +0000 UTC m=+146.945031027"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.031843 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.032157 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.532142939 +0000 UTC m=+147.465984785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.038089 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hwdnz" podStartSLOduration=6.03806872 podStartE2EDuration="6.03806872s" podCreationTimestamp="2026-02-23 00:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.036101917 +0000 UTC m=+146.969943763" watchObservedRunningTime="2026-02-23 00:09:09.03806872 +0000 UTC m=+146.971910566"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.061725 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-blqz2" podStartSLOduration=124.061708942 podStartE2EDuration="2m4.061708942s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.059359188 +0000 UTC m=+146.993201034" watchObservedRunningTime="2026-02-23 00:09:09.061708942 +0000 UTC m=+146.995550788"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.075756 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.098491 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:09 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:09 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:09 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.098544 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.107027 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" podStartSLOduration=124.107004532 podStartE2EDuration="2m4.107004532s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.097282708 +0000 UTC m=+147.031124554" watchObservedRunningTime="2026-02-23 00:09:09.107004532 +0000 UTC m=+147.040846388"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.131572 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-flc99" podStartSLOduration=124.131551568 podStartE2EDuration="2m4.131551568s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.129142803 +0000 UTC m=+147.062984649" watchObservedRunningTime="2026-02-23 00:09:09.131551568 +0000 UTC m=+147.065393414"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.132900 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.133265 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.633252194 +0000 UTC m=+147.567094040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.240701 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mrwl8" podStartSLOduration=124.24067016 podStartE2EDuration="2m4.24067016s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.238704427 +0000 UTC m=+147.172546273" watchObservedRunningTime="2026-02-23 00:09:09.24067016 +0000 UTC m=+147.174512006"
Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.242658 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.243095 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.743079755 +0000 UTC m=+147.676921601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.278216 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mdkxk" podStartSLOduration=124.278193588 podStartE2EDuration="2m4.278193588s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.277669864 +0000 UTC m=+147.211511710" watchObservedRunningTime="2026-02-23 00:09:09.278193588 +0000 UTC m=+147.212035434" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.352032 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: 
\"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.352408 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.852395023 +0000 UTC m=+147.786236869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.399227 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" podStartSLOduration=124.399208184 podStartE2EDuration="2m4.399208184s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.335376971 +0000 UTC m=+147.269218827" watchObservedRunningTime="2026-02-23 00:09:09.399208184 +0000 UTC m=+147.333050030" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.430057 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dztnb" podStartSLOduration=6.43003702 podStartE2EDuration="6.43003702s" podCreationTimestamp="2026-02-23 00:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.400573101 +0000 
UTC m=+147.334414957" watchObservedRunningTime="2026-02-23 00:09:09.43003702 +0000 UTC m=+147.363878866" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.453680 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.453826 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.953805356 +0000 UTC m=+147.887647202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.454357 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.454669 4953 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:09.954654339 +0000 UTC m=+147.888496185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.473609 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24" podStartSLOduration=124.473593453 podStartE2EDuration="2m4.473593453s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.431041568 +0000 UTC m=+147.364883424" watchObservedRunningTime="2026-02-23 00:09:09.473593453 +0000 UTC m=+147.407435299" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.511485 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-szx7x" podStartSLOduration=124.511463521 podStartE2EDuration="2m4.511463521s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.509677562 +0000 UTC m=+147.443519408" watchObservedRunningTime="2026-02-23 00:09:09.511463521 +0000 UTC m=+147.445305367" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.561182 4953 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.561549 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.06153198 +0000 UTC m=+147.995373836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.609360 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" event={"ID":"486601c8-1d5e-4805-8ab4-d4a55de883f9","Type":"ContainerStarted","Data":"ae13246bbbb41a0505a474dcf981486806dd251246be65259f7c44438a01fbe9"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.635702 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" event={"ID":"ae22cab2-d791-4513-8794-e5d93b7447e5","Type":"ContainerStarted","Data":"6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.636753 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.643863 4953 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zblkc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.643930 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" podUID="ae22cab2-d791-4513-8794-e5d93b7447e5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.644893 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t92pb" podStartSLOduration=124.644881952 podStartE2EDuration="2m4.644881952s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.644485622 +0000 UTC m=+147.578327478" watchObservedRunningTime="2026-02-23 00:09:09.644881952 +0000 UTC m=+147.578723798" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.663189 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.663554 4953 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.163542579 +0000 UTC m=+148.097384425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.675272 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" event={"ID":"b88234f7-8355-4c3b-a5f3-6195bbe46bee","Type":"ContainerStarted","Data":"fc4838c706af8337d4c495f91b81d9b816b795e6ffcbfe8625c2b99b01861a5c"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.681563 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vr4sx" event={"ID":"03337365-2181-4cf3-90d6-6664103f220b","Type":"ContainerStarted","Data":"2e4a5ff278367e3654a19bef0b3ba8ada037d2dca4713ee48778c2e7d78a06cb"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.692304 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-blqz2" event={"ID":"ef7c633a-9dcf-4613-9106-3c1f07a6afab","Type":"ContainerStarted","Data":"bc8dc01e2b06c4c542c99837766c90b88603be4504fd21bf1a7b7ac7d3663d9a"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.694273 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-blqz2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 
10.217.0.31:8080: connect: connection refused" start-of-body= Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.694326 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-blqz2" podUID="ef7c633a-9dcf-4613-9106-3c1f07a6afab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.698774 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" event={"ID":"cb671439-af50-4c0d-8f7c-d92b3571b2b0","Type":"ContainerStarted","Data":"3e17f6d64421d4b00a6d6fa5bb3559dd399be3d00b039ebca8a020885805760b"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.702311 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" podStartSLOduration=124.70227138 podStartE2EDuration="2m4.70227138s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.70079801 +0000 UTC m=+147.634639866" watchObservedRunningTime="2026-02-23 00:09:09.70227138 +0000 UTC m=+147.636113226" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.722051 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" event={"ID":"d094612c-465f-4bec-b3f8-bf28a1471217","Type":"ContainerStarted","Data":"f98861c3e833f8f750cd7a2ed1c637f0dada5d4af620e27ddf975df55eba5e01"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.737735 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gdxz7" podStartSLOduration=124.737718732 
podStartE2EDuration="2m4.737718732s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.736325195 +0000 UTC m=+147.670167041" watchObservedRunningTime="2026-02-23 00:09:09.737718732 +0000 UTC m=+147.671560578" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.740148 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" event={"ID":"69abc20a-54cb-47c6-884d-e12fd1984fdb","Type":"ContainerStarted","Data":"a9aeffa26217ec9e8a9cc24f0ff4e153cedea183bfaa19dfaf30fc8355ec0bc5"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.740919 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.747487 4953 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pjdts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.747551 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" podUID="69abc20a-54cb-47c6-884d-e12fd1984fdb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.763748 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.764238 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.264208952 +0000 UTC m=+148.198050788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.778661 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" event={"ID":"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31","Type":"ContainerStarted","Data":"7a0c9cccbe1c65d5cb2d6a8ac4260484f02853ab35399cc21f6318c19c057172"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.784855 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" podStartSLOduration=124.784844442 podStartE2EDuration="2m4.784844442s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.784204714 +0000 UTC m=+147.718046560" watchObservedRunningTime="2026-02-23 00:09:09.784844442 +0000 UTC m=+147.718686288" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.785965 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" event={"ID":"b63716df-3a08-4a47-8bd6-64827459651e","Type":"ContainerStarted","Data":"c2f46fc0eb842ae2ae545eeee36dd6152a0d62f683089bec357df55a2a70d64a"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.786012 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" event={"ID":"b63716df-3a08-4a47-8bd6-64827459651e","Type":"ContainerStarted","Data":"14b88fe0b7b48b8d54d765379e02bddd0bae08822c982de3c49ae1dfe0ddd794"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.800658 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" event={"ID":"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2","Type":"ContainerStarted","Data":"d0d24f79a27ada0aec4f73563161c066049a144de7953a94130e027f0c6fd1ae"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.804487 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" podStartSLOduration=124.80446688399999 podStartE2EDuration="2m4.804466884s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.804153816 +0000 UTC m=+147.737995662" watchObservedRunningTime="2026-02-23 00:09:09.804466884 +0000 UTC m=+147.738308730" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.817692 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" event={"ID":"2a9783d6-f83d-4e9c-bbe8-815b994fede2","Type":"ContainerStarted","Data":"b41de36046745f360258211553ad2cc96297f7b97136d14bc42e2b29b009a499"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.827815 4953 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" event={"ID":"f9fe5f7a-039b-4b0d-9f6e-21ccef0cd280","Type":"ContainerStarted","Data":"01bbb1a26b45920ae28ef6ad47c06ad5763e280c46f21b086c84731da2110ed1"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.839326 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t4h9d" podStartSLOduration=124.83931219 podStartE2EDuration="2m4.83931219s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.838406136 +0000 UTC m=+147.772247982" watchObservedRunningTime="2026-02-23 00:09:09.83931219 +0000 UTC m=+147.773154036" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.840587 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" event={"ID":"15299e81-114c-47b9-a589-1a8d78426736","Type":"ContainerStarted","Data":"d6a54c9ab4964b21edcad1ab5134a545a198bbaa3e0791f36103676287d5db5e"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.862277 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" podStartSLOduration=124.862257363 podStartE2EDuration="2m4.862257363s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.859690003 +0000 UTC m=+147.793531849" watchObservedRunningTime="2026-02-23 00:09:09.862257363 +0000 UTC m=+147.796099209" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.863162 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6" 
event={"ID":"ddd17504-ce95-4253-8c88-1f5cf50f9184","Type":"ContainerStarted","Data":"85a1ac00612d4e3200c3fc4894ecb3631f90c43856d009ebc1c428df48f667a0"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.866014 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.868931 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.368916854 +0000 UTC m=+148.302758770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.874926 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" event={"ID":"6b7576be-af0b-4553-bc96-125e87709ad1","Type":"ContainerStarted","Data":"dad8e87bb242eeeccadee1e9ff49b87b18284972c4c2aa59e967574fa2376f62"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.888777 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t882k" 
podStartSLOduration=124.888749712 podStartE2EDuration="2m4.888749712s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.885822023 +0000 UTC m=+147.819663869" watchObservedRunningTime="2026-02-23 00:09:09.888749712 +0000 UTC m=+147.822591558" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.897417 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" event={"ID":"1b0919d1-104f-4a33-b5df-d604cfa672f2","Type":"ContainerStarted","Data":"ea5cb2d1c707d15cba2f2c216ed0ec937b0b8fd9f9420d64a3e564efb81fa8bd"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.898160 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.911245 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" event={"ID":"2e231e84-62b1-447f-b8bd-713b1027e045","Type":"ContainerStarted","Data":"8e8a2dfc92c5cf90b7da8de7e563d6ed0f3da629dd8dee3adb21c5279612a9b7"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.911277 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" event={"ID":"2e231e84-62b1-447f-b8bd-713b1027e045","Type":"ContainerStarted","Data":"0d75834572a39ebb950fb1eefbaa40082d37e90829b28c047ec285195b2c6399"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.911304 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" event={"ID":"2e231e84-62b1-447f-b8bd-713b1027e045","Type":"ContainerStarted","Data":"01191bdb4c16d3f194c32e742f845d3561489728a59efbdd76c58606c6075aba"} Feb 
23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.917394 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" event={"ID":"bd9d22e8-4e02-41dd-926d-d95d15beceeb","Type":"ContainerStarted","Data":"0297e6eee2a2de80dc2799bfee063b04ac76595d62ec9c7c9ab906614ef09494"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.917451 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" event={"ID":"bd9d22e8-4e02-41dd-926d-d95d15beceeb","Type":"ContainerStarted","Data":"6d754618be66f7b0a7e87118b5e537dfa422be12e77541942cd12e0f30ffc860"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.917858 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q4d7k" podStartSLOduration=124.917845592 podStartE2EDuration="2m4.917845592s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.916993509 +0000 UTC m=+147.850835355" watchObservedRunningTime="2026-02-23 00:09:09.917845592 +0000 UTC m=+147.851687428" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.923599 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" event={"ID":"cedfc67a-fc52-4c43-8962-619b6e436d60","Type":"ContainerStarted","Data":"dbeeb2f5fac84632b5594110b0ba35449b74bbaae4be04206e7bba8cd912e3b7"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.924449 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.942572 4953 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" event={"ID":"be696be4-2c84-4434-9d93-804c9bb6604b","Type":"ContainerStarted","Data":"2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327"} Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.943585 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.951529 4953 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vtp7m container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.952408 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" podUID="be696be4-2c84-4434-9d93-804c9bb6604b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.967917 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.979613 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n5rdt" Feb 23 00:09:09 crc kubenswrapper[4953]: I0223 00:09:09.979847 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8gp45" podStartSLOduration=124.979832635 podStartE2EDuration="2m4.979832635s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:09.942960414 +0000 UTC m=+147.876802260" watchObservedRunningTime="2026-02-23 00:09:09.979832635 +0000 UTC m=+147.913674481" Feb 23 00:09:09 crc kubenswrapper[4953]: E0223 00:09:09.990030 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.490011371 +0000 UTC m=+148.423853217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.001093 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9gcsg" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.006693 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" podStartSLOduration=125.006681234 podStartE2EDuration="2m5.006681234s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 
00:09:09.983536135 +0000 UTC m=+147.917377981" watchObservedRunningTime="2026-02-23 00:09:10.006681234 +0000 UTC m=+147.940523080" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.008979 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dzswb" podStartSLOduration=125.008971246 podStartE2EDuration="2m5.008971246s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.005548823 +0000 UTC m=+147.939390669" watchObservedRunningTime="2026-02-23 00:09:10.008971246 +0000 UTC m=+147.942813082" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.060469 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.070054 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" podStartSLOduration=125.070038443 podStartE2EDuration="2m5.070038443s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.039597477 +0000 UTC m=+147.973439323" watchObservedRunningTime="2026-02-23 00:09:10.070038443 +0000 UTC m=+148.003880289" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.072593 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 
00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.080829 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.580794035 +0000 UTC m=+148.514635881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.101157 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cb4fk" podStartSLOduration=125.101136698 podStartE2EDuration="2m5.101136698s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.100181522 +0000 UTC m=+148.034023368" watchObservedRunningTime="2026-02-23 00:09:10.101136698 +0000 UTC m=+148.034978544" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.103437 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 00:09:10 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Feb 23 00:09:10 crc kubenswrapper[4953]: [+]process-running ok Feb 23 00:09:10 crc kubenswrapper[4953]: healthz check failed Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.103675 
4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.140944 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" podStartSLOduration=125.140925718 podStartE2EDuration="2m5.140925718s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.13989458 +0000 UTC m=+148.073736426" watchObservedRunningTime="2026-02-23 00:09:10.140925718 +0000 UTC m=+148.074767564" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.173870 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.174154 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.674127669 +0000 UTC m=+148.607969515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.174525 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.174778 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.674766556 +0000 UTC m=+148.608608402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.175195 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f62s6" podStartSLOduration=125.175182998 podStartE2EDuration="2m5.175182998s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.17415957 +0000 UTC m=+148.108001416" watchObservedRunningTime="2026-02-23 00:09:10.175182998 +0000 UTC m=+148.109024844" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.277751 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.278082 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.77806802 +0000 UTC m=+148.711909866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.281444 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9sjq" podStartSLOduration=125.281426342 podStartE2EDuration="2m5.281426342s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.214537266 +0000 UTC m=+148.148379112" watchObservedRunningTime="2026-02-23 00:09:10.281426342 +0000 UTC m=+148.215268188" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.281908 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" podStartSLOduration=125.281902104 podStartE2EDuration="2m5.281902104s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.275011717 +0000 UTC m=+148.208853583" watchObservedRunningTime="2026-02-23 00:09:10.281902104 +0000 UTC m=+148.215743950" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.378954 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.379426 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.879407741 +0000 UTC m=+148.813249587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.402189 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" podStartSLOduration=125.402169949 podStartE2EDuration="2m5.402169949s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.400110893 +0000 UTC m=+148.333952739" watchObservedRunningTime="2026-02-23 00:09:10.402169949 +0000 UTC m=+148.336011795" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.435795 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pjp8z" podStartSLOduration=125.435773191 podStartE2EDuration="2m5.435773191s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:10.428987397 +0000 UTC m=+148.362829233" watchObservedRunningTime="2026-02-23 00:09:10.435773191 +0000 UTC m=+148.369615037" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.480302 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.480690 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:10.98067233 +0000 UTC m=+148.914514176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.581606 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65pkm"] Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.581758 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.582046 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.082033812 +0000 UTC m=+149.015875648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.582688 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.586506 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.600618 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65pkm"] Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.683161 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.683299 4953 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.18326085 +0000 UTC m=+149.117102696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.683660 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-catalog-content\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.683689 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.683734 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-utilities\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " 
pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.683843 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk9sp\" (UniqueName: \"kubernetes.io/projected/0033d07e-7400-4307-89d8-efc2e34acee5-kube-api-access-hk9sp\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.684133 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.184114033 +0000 UTC m=+149.117955879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.785423 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.785617 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 00:09:11.285590637 +0000 UTC m=+149.219432483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.785987 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-utilities\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.786023 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk9sp\" (UniqueName: \"kubernetes.io/projected/0033d07e-7400-4307-89d8-efc2e34acee5-kube-api-access-hk9sp\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.786100 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-catalog-content\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.786120 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.786401 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.286389409 +0000 UTC m=+149.220231255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.786822 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-utilities\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.786995 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-catalog-content\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.822987 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk9sp\" 
(UniqueName: \"kubernetes.io/projected/0033d07e-7400-4307-89d8-efc2e34acee5-kube-api-access-hk9sp\") pod \"certified-operators-65pkm\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.887457 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.887624 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.387598886 +0000 UTC m=+149.321440722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.887757 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.888084 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.388073309 +0000 UTC m=+149.321915155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.897508 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.924565 4953 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kqn62 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.924623 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" podUID="cedfc67a-fc52-4c43-8962-619b6e436d60" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.939124 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.939509 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.971924 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9x56p" event={"ID":"28f92e4d-87a5-4cb4-9f42-1301b5d4fc31","Type":"ContainerStarted","Data":"cb958b3c8dd342d6602a60a4afc4cc66eb37f6faa86269dafc57279305e41ad1"} Feb 23 00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.984235 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" event={"ID":"7402d900-4b66-4c9d-8904-b35c3d4b06c7","Type":"ContainerStarted","Data":"af91d637c25bfbb09da7dc9478a9fc497bc70eeb33c516da9d7f146c3edb7e88"} Feb 23 
00:09:10 crc kubenswrapper[4953]: I0223 00:09:10.988408 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:10 crc kubenswrapper[4953]: E0223 00:09:10.988745 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.488728862 +0000 UTC m=+149.422570708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.001005 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bfrxz"] Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.001963 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.019647 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" event={"ID":"e7cca116-4f86-480b-9186-651912ae24d1","Type":"ContainerStarted","Data":"cd40251956d688d143d55f54cd321db67fb4404944a13c76abe59c8af22e7e99"} Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.025745 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfrxz"] Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.035043 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vr4sx" event={"ID":"03337365-2181-4cf3-90d6-6664103f220b","Type":"ContainerStarted","Data":"4d264484e19faa25260b70bb50d2b1c771a717a7b08c2f3d4a17624d2cc59944"} Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.035499 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-vr4sx" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.060318 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x9fkx" event={"ID":"0ff76a9d-67aa-4d7b-b21d-5df9ff4a03c2","Type":"ContainerStarted","Data":"f63d532106ecbec3f197561152f9d66f919d10f70b51a76455c4a149552c77cf"} Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.065579 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vr4sx" podStartSLOduration=8.065563247 podStartE2EDuration="8.065563247s" podCreationTimestamp="2026-02-23 00:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:11.064033536 +0000 UTC m=+148.997875392" watchObservedRunningTime="2026-02-23 00:09:11.065563247 +0000 UTC m=+148.999405093" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 
00:09:11.075153 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" event={"ID":"957f8efa-8db6-4dd9-969d-2ec80e8c6f7a","Type":"ContainerStarted","Data":"553281d9d9074f4e90f0a23483807fc40a4f6893ec25bf49dbf29753be5b5514"} Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.085521 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 00:09:11 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Feb 23 00:09:11 crc kubenswrapper[4953]: [+]process-running ok Feb 23 00:09:11 crc kubenswrapper[4953]: healthz check failed Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.085576 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.088892 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" event={"ID":"0f48e137-1959-43ff-8386-e7560140f2d4","Type":"ContainerStarted","Data":"94da39296a6bd3f08fb5734d0bd9155b6386073144c8a93e7198d45c32eeb8fe"} Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.088937 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" event={"ID":"0f48e137-1959-43ff-8386-e7560140f2d4","Type":"ContainerStarted","Data":"55a797df7eecc94f553c66f73411270331e7a5f24981a7f1cd2571541a418326"} Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.092458 4953 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pjdts container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.092498 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" podUID="69abc20a-54cb-47c6-884d-e12fd1984fdb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.093033 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-blqz2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.093060 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-blqz2" podUID="ef7c633a-9dcf-4613-9106-3c1f07a6afab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.094306 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-catalog-content\") pod \"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.094340 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmb2b\" (UniqueName: \"kubernetes.io/projected/90ff8dec-87f8-49c9-a006-8134bca4e36f-kube-api-access-nmb2b\") pod 
\"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.094422 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-utilities\") pod \"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.094455 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.095499 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.59548566 +0000 UTC m=+149.529327506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.105867 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.106275 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.108442 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqn62" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.108786 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-t5h8l" podStartSLOduration=126.10875607 podStartE2EDuration="2m6.10875607s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:11.108704158 +0000 UTC m=+149.042546004" watchObservedRunningTime="2026-02-23 00:09:11.10875607 +0000 UTC m=+149.042597916" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.188221 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-44v2s"] Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.198090 4953 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.198535 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-utilities\") pod \"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.213565 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.216250 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-catalog-content\") pod \"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.216322 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmb2b\" (UniqueName: \"kubernetes.io/projected/90ff8dec-87f8-49c9-a006-8134bca4e36f-kube-api-access-nmb2b\") pod \"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.217542 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 00:09:11.717525422 +0000 UTC m=+149.651367268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.221668 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-utilities\") pod \"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.277494 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-catalog-content\") pod \"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.284013 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44v2s"] Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.286241 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.320722 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-utilities\") pod \"community-operators-44v2s\" (UID: 
\"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.321216 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.321269 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-catalog-content\") pod \"community-operators-44v2s\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.321309 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncxvp\" (UniqueName: \"kubernetes.io/projected/b82e6868-6070-4c8b-9564-c7f0ae98c951-kube-api-access-ncxvp\") pod \"community-operators-44v2s\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.321639 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.821626248 +0000 UTC m=+149.755468094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.354354 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmb2b\" (UniqueName: \"kubernetes.io/projected/90ff8dec-87f8-49c9-a006-8134bca4e36f-kube-api-access-nmb2b\") pod \"certified-operators-bfrxz\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.419789 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk" podStartSLOduration=126.419775652 podStartE2EDuration="2m6.419775652s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:11.417067839 +0000 UTC m=+149.350909695" watchObservedRunningTime="2026-02-23 00:09:11.419775652 +0000 UTC m=+149.353617498" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.425264 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.425465 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-catalog-content\") pod \"community-operators-44v2s\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.425492 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncxvp\" (UniqueName: \"kubernetes.io/projected/b82e6868-6070-4c8b-9564-c7f0ae98c951-kube-api-access-ncxvp\") pod \"community-operators-44v2s\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.425520 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-utilities\") pod \"community-operators-44v2s\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.425932 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-utilities\") pod \"community-operators-44v2s\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.425995 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:11.925981571 +0000 UTC m=+149.859823417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.426184 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-catalog-content\") pod \"community-operators-44v2s\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.437382 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q77kp"] Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.450102 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.490971 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncxvp\" (UniqueName: \"kubernetes.io/projected/b82e6868-6070-4c8b-9564-c7f0ae98c951-kube-api-access-ncxvp\") pod \"community-operators-44v2s\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.508352 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q77kp"] Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.528013 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.528075 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-catalog-content\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.528111 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-utilities\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.528135 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6kdk\" (UniqueName: \"kubernetes.io/projected/153fb1b2-654f-412d-8c1f-e4f6c48f967f-kube-api-access-l6kdk\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.528445 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.028432952 +0000 UTC m=+149.962274798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.619353 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.631138 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.631396 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-catalog-content\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.631437 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-utilities\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.631459 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6kdk\" (UniqueName: \"kubernetes.io/projected/153fb1b2-654f-412d-8c1f-e4f6c48f967f-kube-api-access-l6kdk\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.631921 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 00:09:12.131905041 +0000 UTC m=+150.065746887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.632341 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-catalog-content\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.632691 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-utilities\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.684488 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.713205 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6kdk\" (UniqueName: \"kubernetes.io/projected/153fb1b2-654f-412d-8c1f-e4f6c48f967f-kube-api-access-l6kdk\") pod \"community-operators-q77kp\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.733807 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.734422 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.234407373 +0000 UTC m=+150.168249219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.737614 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65pkm"] Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.838549 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.838937 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.33892039 +0000 UTC m=+150.272762226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.839176 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.839474 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.339467255 +0000 UTC m=+150.273309101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.876226 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:11 crc kubenswrapper[4953]: I0223 00:09:11.944791 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:11 crc kubenswrapper[4953]: E0223 00:09:11.945086 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.445071242 +0000 UTC m=+150.378913088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.038604 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.054530 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:12 crc 
kubenswrapper[4953]: E0223 00:09:12.054915 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.554865332 +0000 UTC m=+150.488707178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.104868 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 00:09:12 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Feb 23 00:09:12 crc kubenswrapper[4953]: [+]process-running ok Feb 23 00:09:12 crc kubenswrapper[4953]: healthz check failed Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.104905 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.124476 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" event={"ID":"e7cca116-4f86-480b-9186-651912ae24d1","Type":"ContainerStarted","Data":"7d1a7cfe20968db2936477be10c0eafd3ffe234ec12f3540b2f1cc790c56c2ba"} Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 
00:09:12.133714 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pkm" event={"ID":"0033d07e-7400-4307-89d8-efc2e34acee5","Type":"ContainerStarted","Data":"0c3cc4a26772079614dce3804e59346da95853edb5cf9af21202be37625bdbe4"} Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.145000 4953 generic.go:334] "Generic (PLEG): container finished" podID="cb671439-af50-4c0d-8f7c-d92b3571b2b0" containerID="3e17f6d64421d4b00a6d6fa5bb3559dd399be3d00b039ebca8a020885805760b" exitCode=0 Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.146704 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" event={"ID":"cb671439-af50-4c0d-8f7c-d92b3571b2b0","Type":"ContainerDied","Data":"3e17f6d64421d4b00a6d6fa5bb3559dd399be3d00b039ebca8a020885805760b"} Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.148718 4953 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pjdts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.148758 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" podUID="69abc20a-54cb-47c6-884d-e12fd1984fdb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.162677 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.163088 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.663073499 +0000 UTC m=+150.596915346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.184401 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zh26" Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.251973 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfrxz"] Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.268271 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.307248 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 00:09:12.807230683 +0000 UTC m=+150.741072529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.371110 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.372042 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.872026132 +0000 UTC m=+150.805867978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.474119 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.474479 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:12.974467801 +0000 UTC m=+150.908309647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.575752 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.575847 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.075829043 +0000 UTC m=+151.009670889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.576186 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.576481 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.0764748 +0000 UTC m=+151.010316646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.641057 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q77kp"] Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.670835 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44v2s"] Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.677859 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.678300 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.178260833 +0000 UTC m=+151.112102679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: W0223 00:09:12.679898 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod153fb1b2_654f_412d_8c1f_e4f6c48f967f.slice/crio-922e9a8876f8dcb315e6981839c6519cef9cf9df311abc15e7a3b51207bb8752 WatchSource:0}: Error finding container 922e9a8876f8dcb315e6981839c6519cef9cf9df311abc15e7a3b51207bb8752: Status 404 returned error can't find the container with id 922e9a8876f8dcb315e6981839c6519cef9cf9df311abc15e7a3b51207bb8752 Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.766046 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cj6c7" Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.780506 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.780993 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 00:09:13.280978202 +0000 UTC m=+151.214820048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.881720 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.881844 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.381817159 +0000 UTC m=+151.315659005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.881993 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.882269 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.382257511 +0000 UTC m=+151.316099357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.971718 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l7fhk"] Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.972786 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.976116 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.982715 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.982872 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.482847841 +0000 UTC m=+151.416689687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.982967 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:12 crc kubenswrapper[4953]: E0223 00:09:12.983274 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.483260733 +0000 UTC m=+151.417102579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:12 crc kubenswrapper[4953]: I0223 00:09:12.995916 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7fhk"] Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.084116 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.084350 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-utilities\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.084420 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-catalog-content\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.084552 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pch4l\" (UniqueName: \"kubernetes.io/projected/aec579bf-d684-4ab1-a91c-366365920404-kube-api-access-pch4l\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.084668 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.584650605 +0000 UTC m=+151.518492461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.088830 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 00:09:13 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Feb 23 00:09:13 crc kubenswrapper[4953]: [+]process-running ok Feb 23 00:09:13 crc kubenswrapper[4953]: healthz check failed Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.088871 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 
00:09:13.153120 4953 generic.go:334] "Generic (PLEG): container finished" podID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerID="ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0" exitCode=0 Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.153226 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfrxz" event={"ID":"90ff8dec-87f8-49c9-a006-8134bca4e36f","Type":"ContainerDied","Data":"ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0"} Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.153487 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfrxz" event={"ID":"90ff8dec-87f8-49c9-a006-8134bca4e36f","Type":"ContainerStarted","Data":"129a39c40670d6f4648586ee01e52402800b630ff01cd70b46c8a44cf87c5427"} Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.155203 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.155548 4953 generic.go:334] "Generic (PLEG): container finished" podID="0033d07e-7400-4307-89d8-efc2e34acee5" containerID="e4834df52118b1f23e3aaaaecb593ecf6827bff48f27a3bfea4874a6345268bf" exitCode=0 Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.155590 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pkm" event={"ID":"0033d07e-7400-4307-89d8-efc2e34acee5","Type":"ContainerDied","Data":"e4834df52118b1f23e3aaaaecb593ecf6827bff48f27a3bfea4874a6345268bf"} Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.159416 4953 generic.go:334] "Generic (PLEG): container finished" podID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerID="e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5" exitCode=0 Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.159480 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-44v2s" event={"ID":"b82e6868-6070-4c8b-9564-c7f0ae98c951","Type":"ContainerDied","Data":"e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5"}
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.159736 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v2s" event={"ID":"b82e6868-6070-4c8b-9564-c7f0ae98c951","Type":"ContainerStarted","Data":"106af232ddca2a8e1acf0e6f25ad6cb3426470d4fc23b6ba1d098a7b894eeeb6"}
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.163233 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" event={"ID":"e7cca116-4f86-480b-9186-651912ae24d1","Type":"ContainerStarted","Data":"dc96e3a32485d3b367b943f01cce0dd1644fdae358e0a7794890a7bf62bd6035"}
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.163301 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" event={"ID":"e7cca116-4f86-480b-9186-651912ae24d1","Type":"ContainerStarted","Data":"242d198cab7d166151a760c0da4292beefd25906f40550cc72f3c0bca21c8638"}
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.164750 4953 generic.go:334] "Generic (PLEG): container finished" podID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerID="ae763557ff0c86a17c50a20076ba3482df4efb0222773a6e3b495060eae34099" exitCode=0
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.164839 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77kp" event={"ID":"153fb1b2-654f-412d-8c1f-e4f6c48f967f","Type":"ContainerDied","Data":"ae763557ff0c86a17c50a20076ba3482df4efb0222773a6e3b495060eae34099"}
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.164867 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77kp" event={"ID":"153fb1b2-654f-412d-8c1f-e4f6c48f967f","Type":"ContainerStarted","Data":"922e9a8876f8dcb315e6981839c6519cef9cf9df311abc15e7a3b51207bb8752"}
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.186049 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.186116 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pch4l\" (UniqueName: \"kubernetes.io/projected/aec579bf-d684-4ab1-a91c-366365920404-kube-api-access-pch4l\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.186141 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-utilities\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.186179 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-catalog-content\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.186741 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-catalog-content\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.187009 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-utilities\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk"
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.187142 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.687123857 +0000 UTC m=+151.620965793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.222446 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pch4l\" (UniqueName: \"kubernetes.io/projected/aec579bf-d684-4ab1-a91c-366365920404-kube-api-access-pch4l\") pod \"redhat-marketplace-l7fhk\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " pod="openshift-marketplace/redhat-marketplace-l7fhk"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.285146 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7fhk"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.287439 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.287577 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.787557463 +0000 UTC m=+151.721399309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.287793 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.287905 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.287929 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.288838 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.289462 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.290470 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.790457882 +0000 UTC m=+151.724299728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.298937 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.300919 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.305154 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.309522 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.337789 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-n5dbh" podStartSLOduration=10.337772756 podStartE2EDuration="10.337772756s" podCreationTimestamp="2026-02-23 00:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:13.336640155 +0000 UTC m=+151.270482001" watchObservedRunningTime="2026-02-23 00:09:13.337772756 +0000 UTC m=+151.271614602"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.382682 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bt7vz"]
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.384519 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.398143 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.398654 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:13.898636358 +0000 UTC m=+151.832478204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.415174 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt7vz"]
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.454296 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.457879 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.500164 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.500244 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-utilities\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.500274 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-catalog-content\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.500307 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7zdl\" (UniqueName: \"kubernetes.io/projected/878d6a76-f92a-46cc-aaad-4b4a2fba9574-kube-api-access-c7zdl\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.500576 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.000563805 +0000 UTC m=+151.934405651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.562549 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.601115 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.604470 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.104437895 +0000 UTC m=+152.038279741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.601302 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-utilities\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.605071 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-catalog-content\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.605110 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7zdl\" (UniqueName: \"kubernetes.io/projected/878d6a76-f92a-46cc-aaad-4b4a2fba9574-kube-api-access-c7zdl\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.605198 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.605546 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.105532405 +0000 UTC m=+152.039374251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.605997 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-utilities\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.606261 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-catalog-content\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.618731 4953 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.627934 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7zdl\" (UniqueName: \"kubernetes.io/projected/878d6a76-f92a-46cc-aaad-4b4a2fba9574-kube-api-access-c7zdl\") pod \"redhat-marketplace-bt7vz\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.709769 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.710029 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.21000196 +0000 UTC m=+152.143843816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.710247 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.710932 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.210920045 +0000 UTC m=+152.144761901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.730374 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bt7vz"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.769268 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.816110 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.816576 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.316557683 +0000 UTC m=+152.250399529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.911428 4953 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-23T00:09:13.618758114Z","Handler":null,"Name":""}
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.917163 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qzkm\" (UniqueName: \"kubernetes.io/projected/cb671439-af50-4c0d-8f7c-d92b3571b2b0-kube-api-access-5qzkm\") pod \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") "
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.917267 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb671439-af50-4c0d-8f7c-d92b3571b2b0-config-volume\") pod \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") "
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.917336 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb671439-af50-4c0d-8f7c-d92b3571b2b0-secret-volume\") pod \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\" (UID: \"cb671439-af50-4c0d-8f7c-d92b3571b2b0\") "
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.917640 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.917942 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.417931565 +0000 UTC m=+152.351773411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.918852 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb671439-af50-4c0d-8f7c-d92b3571b2b0-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb671439-af50-4c0d-8f7c-d92b3571b2b0" (UID: "cb671439-af50-4c0d-8f7c-d92b3571b2b0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.945273 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb671439-af50-4c0d-8f7c-d92b3571b2b0-kube-api-access-5qzkm" (OuterVolumeSpecName: "kube-api-access-5qzkm") pod "cb671439-af50-4c0d-8f7c-d92b3571b2b0" (UID: "cb671439-af50-4c0d-8f7c-d92b3571b2b0"). InnerVolumeSpecName "kube-api-access-5qzkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.946082 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb671439-af50-4c0d-8f7c-d92b3571b2b0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb671439-af50-4c0d-8f7c-d92b3571b2b0" (UID: "cb671439-af50-4c0d-8f7c-d92b3571b2b0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.947833 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7fhk"]
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.974067 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pp2fr"]
Feb 23 00:09:13 crc kubenswrapper[4953]: E0223 00:09:13.974304 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb671439-af50-4c0d-8f7c-d92b3571b2b0" containerName="collect-profiles"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.974316 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb671439-af50-4c0d-8f7c-d92b3571b2b0" containerName="collect-profiles"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.974442 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb671439-af50-4c0d-8f7c-d92b3571b2b0" containerName="collect-profiles"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.975131 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pp2fr"
Feb 23 00:09:13 crc kubenswrapper[4953]: I0223 00:09:13.977436 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:13.993013 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pp2fr"]
Feb 23 00:09:14 crc kubenswrapper[4953]: W0223 00:09:14.007780 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaec579bf_d684_4ab1_a91c_366365920404.slice/crio-d094bfe3bf464e56481491c77b33d8dfe923903ba87d1cd505c4add7bd43f81d WatchSource:0}: Error finding container d094bfe3bf464e56481491c77b33d8dfe923903ba87d1cd505c4add7bd43f81d: Status 404 returned error can't find the container with id d094bfe3bf464e56481491c77b33d8dfe923903ba87d1cd505c4add7bd43f81d
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.019533 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:14 crc kubenswrapper[4953]: E0223 00:09:14.019887 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.519853942 +0000 UTC m=+152.453695788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.021104 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.023037 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb671439-af50-4c0d-8f7c-d92b3571b2b0-config-volume\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.023063 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb671439-af50-4c0d-8f7c-d92b3571b2b0-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.023075 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qzkm\" (UniqueName: \"kubernetes.io/projected/cb671439-af50-4c0d-8f7c-d92b3571b2b0-kube-api-access-5qzkm\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:14 crc kubenswrapper[4953]: E0223 00:09:14.026668 4953 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 00:09:14.526651636 +0000 UTC m=+152.460493482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwcpz" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.044598 4953 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.044636 4953 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.083686 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:14 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:14 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:14 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.083740 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.124782 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.125095 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-utilities\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.125193 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wn22\" (UniqueName: \"kubernetes.io/projected/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-kube-api-access-9wn22\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.125254 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-catalog-content\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.154820 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.197526 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.197759 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530080-tpjsc" event={"ID":"cb671439-af50-4c0d-8f7c-d92b3571b2b0","Type":"ContainerDied","Data":"8b0a104ad669a7cdf0d9725406e4f0996a98f3b5ddbde5e6cc61dc34263ce660"}
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.197802 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b0a104ad669a7cdf0d9725406e4f0996a98f3b5ddbde5e6cc61dc34263ce660"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.203922 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7fhk" event={"ID":"aec579bf-d684-4ab1-a91c-366365920404","Type":"ContainerStarted","Data":"d094bfe3bf464e56481491c77b33d8dfe923903ba87d1cd505c4add7bd43f81d"}
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.228926 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wn22\" (UniqueName: \"kubernetes.io/projected/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-kube-api-access-9wn22\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.229376 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-catalog-content\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr"
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223
00:09:14.229400 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-utilities\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.229435 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.232105 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-catalog-content\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.232132 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-utilities\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.248567 4953 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.248605 4953 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.278627 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wn22\" (UniqueName: \"kubernetes.io/projected/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-kube-api-access-9wn22\") pod \"redhat-operators-pp2fr\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.300734 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwcpz\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.338191 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:09:14 crc kubenswrapper[4953]: E0223 00:09:14.340821 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaec579bf_d684_4ab1_a91c_366365920404.slice/crio-conmon-9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55.scope\": RecentStats: unable to find data in memory cache]" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.368132 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m97ns"] Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.369652 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.387259 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m97ns"] Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.410701 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.432670 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-catalog-content\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.432725 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsdvt\" (UniqueName: \"kubernetes.io/projected/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-kube-api-access-dsdvt\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.432793 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-utilities\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.453165 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt7vz"] Feb 23 00:09:14 crc kubenswrapper[4953]: W0223 00:09:14.477086 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod878d6a76_f92a_46cc_aaad_4b4a2fba9574.slice/crio-7a57ee74f30d292512b0ef3d4d73fa0fc8e49242ce113feb6d17e094f3c7b8e1 WatchSource:0}: Error finding container 7a57ee74f30d292512b0ef3d4d73fa0fc8e49242ce113feb6d17e094f3c7b8e1: Status 404 returned error can't find the container with id 
7a57ee74f30d292512b0ef3d4d73fa0fc8e49242ce113feb6d17e094f3c7b8e1 Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.534563 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-catalog-content\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.534833 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsdvt\" (UniqueName: \"kubernetes.io/projected/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-kube-api-access-dsdvt\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.534871 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-utilities\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.535192 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-catalog-content\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.536489 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-utilities\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 
00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.573806 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsdvt\" (UniqueName: \"kubernetes.io/projected/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-kube-api-access-dsdvt\") pod \"redhat-operators-m97ns\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.699933 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.700394 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.711935 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pp2fr"] Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.715372 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:14 crc kubenswrapper[4953]: W0223 00:09:14.738116 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b304dea_fc9b_4fc9_ae6a_6b8377e36578.slice/crio-77fed5202a0fbab6f1ab4d439c5283fc7dbd6446c61163ba95e5329a70c77445 WatchSource:0}: Error finding container 77fed5202a0fbab6f1ab4d439c5283fc7dbd6446c61163ba95e5329a70c77445: Status 404 returned error can't find the container with id 77fed5202a0fbab6f1ab4d439c5283fc7dbd6446c61163ba95e5329a70c77445 Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.964800 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwcpz"] Feb 23 00:09:14 crc kubenswrapper[4953]: I0223 00:09:14.997191 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m97ns"] Feb 23 00:09:15 crc kubenswrapper[4953]: W0223 00:09:15.024725 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e69edf3_0e86_4ae1_ad9c_a887d52fb655.slice/crio-6ae50be8fd4d9f937619a46c0e18a7ab7f33efab9fada8c2d5314409ec05fae9 WatchSource:0}: Error finding container 6ae50be8fd4d9f937619a46c0e18a7ab7f33efab9fada8c2d5314409ec05fae9: Status 404 returned error can't find the container with id 6ae50be8fd4d9f937619a46c0e18a7ab7f33efab9fada8c2d5314409ec05fae9 Feb 23 00:09:15 crc kubenswrapper[4953]: W0223 00:09:15.031678 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f0a47df_1b8c_4e49_bbd7_1c55b257f918.slice/crio-6abf5b9ad444d5e8cffddc69867891c3785d1daa7f6e6ed6b2692e7956da574a WatchSource:0}: Error finding container 6abf5b9ad444d5e8cffddc69867891c3785d1daa7f6e6ed6b2692e7956da574a: Status 404 returned error can't find the container with id 
6abf5b9ad444d5e8cffddc69867891c3785d1daa7f6e6ed6b2692e7956da574a Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.079991 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 00:09:15 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld Feb 23 00:09:15 crc kubenswrapper[4953]: [+]process-running ok Feb 23 00:09:15 crc kubenswrapper[4953]: healthz check failed Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.080045 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.103025 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.103867 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.105666 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.106843 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.110107 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.215891 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fbb9f78017f11b426ed64837ab24bc547ab8146a54393e3611c9d2f7797384eb"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.216400 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5c788b9ddde346227e70e6a05b26a3d66e3eeccf138003de2a54eb9e6b747da8"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.223737 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e8f0477aa1ae08e37dfdfe4a778b118790e6585c4ff0f04384983c720c79b05c"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.223785 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a4897001fe25b49ecc8cc3d08fd8009b7b2b8690b7831951bf4a8bfdac9ff461"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.254555 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.254602 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.275519 4953 generic.go:334] "Generic (PLEG): container finished" podID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerID="299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0" exitCode=0 Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.275614 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt7vz" event={"ID":"878d6a76-f92a-46cc-aaad-4b4a2fba9574","Type":"ContainerDied","Data":"299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.275641 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt7vz" event={"ID":"878d6a76-f92a-46cc-aaad-4b4a2fba9574","Type":"ContainerStarted","Data":"7a57ee74f30d292512b0ef3d4d73fa0fc8e49242ce113feb6d17e094f3c7b8e1"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.279453 4953 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7591e5810b32ab474dafbaac8738372b00305c340155a437359a69a8ed13437e"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.279499 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8f148739275d8a3561a98c86a9cb2e5779b143f5b17cbb47ed6dd19703b83564"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.279684 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.287833 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.288748 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.291155 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.291659 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" event={"ID":"0f0a47df-1b8c-4e49-bbd7-1c55b257f918","Type":"ContainerStarted","Data":"6abf5b9ad444d5e8cffddc69867891c3785d1daa7f6e6ed6b2692e7956da574a"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.291726 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.297655 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.302597 4953 generic.go:334] "Generic (PLEG): container finished" podID="aec579bf-d684-4ab1-a91c-366365920404" containerID="9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55" exitCode=0 Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.302698 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7fhk" event={"ID":"aec579bf-d684-4ab1-a91c-366365920404","Type":"ContainerDied","Data":"9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.342189 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.343007 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m97ns" 
event={"ID":"5e69edf3-0e86-4ae1-ad9c-a887d52fb655","Type":"ContainerStarted","Data":"6ae50be8fd4d9f937619a46c0e18a7ab7f33efab9fada8c2d5314409ec05fae9"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.351412 4953 generic.go:334] "Generic (PLEG): container finished" podID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerID="6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055" exitCode=0 Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.351447 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp2fr" event={"ID":"8b304dea-fc9b-4fc9-ae6a-6b8377e36578","Type":"ContainerDied","Data":"6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.351471 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp2fr" event={"ID":"8b304dea-fc9b-4fc9-ae6a-6b8377e36578","Type":"ContainerStarted","Data":"77fed5202a0fbab6f1ab4d439c5283fc7dbd6446c61163ba95e5329a70c77445"} Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.363095 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.363272 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c31fc79b-a9ab-444e-b6b4-d267134f360c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c31fc79b-a9ab-444e-b6b4-d267134f360c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.363307 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c31fc79b-a9ab-444e-b6b4-d267134f360c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c31fc79b-a9ab-444e-b6b4-d267134f360c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.363342 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.363418 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.430236 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.440372 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.464598 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c31fc79b-a9ab-444e-b6b4-d267134f360c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c31fc79b-a9ab-444e-b6b4-d267134f360c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.464631 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c31fc79b-a9ab-444e-b6b4-d267134f360c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c31fc79b-a9ab-444e-b6b4-d267134f360c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.465441 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c31fc79b-a9ab-444e-b6b4-d267134f360c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c31fc79b-a9ab-444e-b6b4-d267134f360c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.496588 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c31fc79b-a9ab-444e-b6b4-d267134f360c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c31fc79b-a9ab-444e-b6b4-d267134f360c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.659449 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.837084 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 23 00:09:15 crc kubenswrapper[4953]: I0223 00:09:15.950015 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.012134 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.013712 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.021433 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sbhgq"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.021512 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sbhgq"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.032067 4953 patch_prober.go:28] interesting pod/console-f9d7485db-sbhgq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.032167 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sbhgq" podUID="4d7fd8ab-12ef-4686-887c-3f4acbb5a30b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.043035 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.075775 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.086409 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:16 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:16 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:16 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.086475 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.405129 4953 generic.go:334] "Generic (PLEG): container finished" podID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerID="aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b" exitCode=0
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.405335 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m97ns" event={"ID":"5e69edf3-0e86-4ae1-ad9c-a887d52fb655","Type":"ContainerDied","Data":"aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b"}
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.407862 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c31fc79b-a9ab-444e-b6b4-d267134f360c","Type":"ContainerStarted","Data":"472f184bf5db9975a22d5f5d7648d3140c751959757bc8a1ad9ed68b13c46d26"}
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.413485 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"72ff22ff-a12d-409a-9c01-9c83064f7b6d","Type":"ContainerStarted","Data":"693aab806e500207f8fe7707f143c36517a313cd1bca2c6b52d818057e4326b7"}
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.418222 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" event={"ID":"0f0a47df-1b8c-4e49-bbd7-1c55b257f918","Type":"ContainerStarted","Data":"1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9"}
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.418694 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.429923 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-f6ptk"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.438554 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" podStartSLOduration=131.438537686 podStartE2EDuration="2m11.438537686s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:16.43830518 +0000 UTC m=+154.372147046" watchObservedRunningTime="2026-02-23 00:09:16.438537686 +0000 UTC m=+154.372379532"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.480263 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-blqz2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.480338 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-blqz2" podUID="ef7c633a-9dcf-4613-9106-3c1f07a6afab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.484762 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-blqz2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.484830 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-blqz2" podUID="ef7c633a-9dcf-4613-9106-3c1f07a6afab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Feb 23 00:09:16 crc kubenswrapper[4953]: I0223 00:09:16.519007 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts"
Feb 23 00:09:17 crc kubenswrapper[4953]: I0223 00:09:17.084949 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:17 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:17 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:17 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:17 crc kubenswrapper[4953]: I0223 00:09:17.086093 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:17 crc kubenswrapper[4953]: I0223 00:09:17.455405 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c31fc79b-a9ab-444e-b6b4-d267134f360c","Type":"ContainerStarted","Data":"9fbe01038ca4d0cc7d9fa05c9234a0c974c927eea1627d388a36cf797f221b8e"}
Feb 23 00:09:17 crc kubenswrapper[4953]: I0223 00:09:17.478259 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.47824102 podStartE2EDuration="2.47824102s" podCreationTimestamp="2026-02-23 00:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:17.466920992 +0000 UTC m=+155.400762828" watchObservedRunningTime="2026-02-23 00:09:17.47824102 +0000 UTC m=+155.412082866"
Feb 23 00:09:17 crc kubenswrapper[4953]: I0223 00:09:17.481334 4953 generic.go:334] "Generic (PLEG): container finished" podID="72ff22ff-a12d-409a-9c01-9c83064f7b6d" containerID="700837a44a2af95494bc08550f29f1284b4de6e6e850b4724c2ba2bcadea5379" exitCode=0
Feb 23 00:09:17 crc kubenswrapper[4953]: I0223 00:09:17.482076 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"72ff22ff-a12d-409a-9c01-9c83064f7b6d","Type":"ContainerDied","Data":"700837a44a2af95494bc08550f29f1284b4de6e6e850b4724c2ba2bcadea5379"}
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.079153 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:18 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:18 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:18 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.079230 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.498563 4953 generic.go:334] "Generic (PLEG): container finished" podID="c31fc79b-a9ab-444e-b6b4-d267134f360c" containerID="9fbe01038ca4d0cc7d9fa05c9234a0c974c927eea1627d388a36cf797f221b8e" exitCode=0
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.498789 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c31fc79b-a9ab-444e-b6b4-d267134f360c","Type":"ContainerDied","Data":"9fbe01038ca4d0cc7d9fa05c9234a0c974c927eea1627d388a36cf797f221b8e"}
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.865846 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.993592 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kube-api-access\") pod \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\" (UID: \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\") "
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.993702 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kubelet-dir\") pod \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\" (UID: \"72ff22ff-a12d-409a-9c01-9c83064f7b6d\") "
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.993902 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72ff22ff-a12d-409a-9c01-9c83064f7b6d" (UID: "72ff22ff-a12d-409a-9c01-9c83064f7b6d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:09:18 crc kubenswrapper[4953]: I0223 00:09:18.999044 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72ff22ff-a12d-409a-9c01-9c83064f7b6d" (UID: "72ff22ff-a12d-409a-9c01-9c83064f7b6d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:09:19 crc kubenswrapper[4953]: I0223 00:09:19.078059 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:19 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:19 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:19 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:19 crc kubenswrapper[4953]: I0223 00:09:19.078136 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:19 crc kubenswrapper[4953]: I0223 00:09:19.095406 4953 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:19 crc kubenswrapper[4953]: I0223 00:09:19.095445 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72ff22ff-a12d-409a-9c01-9c83064f7b6d-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:19 crc kubenswrapper[4953]: I0223 00:09:19.511700 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"72ff22ff-a12d-409a-9c01-9c83064f7b6d","Type":"ContainerDied","Data":"693aab806e500207f8fe7707f143c36517a313cd1bca2c6b52d818057e4326b7"}
Feb 23 00:09:19 crc kubenswrapper[4953]: I0223 00:09:19.511751 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="693aab806e500207f8fe7707f143c36517a313cd1bca2c6b52d818057e4326b7"
Feb 23 00:09:19 crc kubenswrapper[4953]: I0223 00:09:19.511770 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 23 00:09:19 crc kubenswrapper[4953]: I0223 00:09:19.998829 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.085270 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:20 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:20 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:20 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.085371 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.112552 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c31fc79b-a9ab-444e-b6b4-d267134f360c-kube-api-access\") pod \"c31fc79b-a9ab-444e-b6b4-d267134f360c\" (UID: \"c31fc79b-a9ab-444e-b6b4-d267134f360c\") "
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.112649 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c31fc79b-a9ab-444e-b6b4-d267134f360c-kubelet-dir\") pod \"c31fc79b-a9ab-444e-b6b4-d267134f360c\" (UID: \"c31fc79b-a9ab-444e-b6b4-d267134f360c\") "
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.112966 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31fc79b-a9ab-444e-b6b4-d267134f360c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c31fc79b-a9ab-444e-b6b4-d267134f360c" (UID: "c31fc79b-a9ab-444e-b6b4-d267134f360c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.143574 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31fc79b-a9ab-444e-b6b4-d267134f360c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c31fc79b-a9ab-444e-b6b4-d267134f360c" (UID: "c31fc79b-a9ab-444e-b6b4-d267134f360c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.233700 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c31fc79b-a9ab-444e-b6b4-d267134f360c-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.233745 4953 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c31fc79b-a9ab-444e-b6b4-d267134f360c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.526401 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c31fc79b-a9ab-444e-b6b4-d267134f360c","Type":"ContainerDied","Data":"472f184bf5db9975a22d5f5d7648d3140c751959757bc8a1ad9ed68b13c46d26"}
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.526461 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="472f184bf5db9975a22d5f5d7648d3140c751959757bc8a1ad9ed68b13c46d26"
Feb 23 00:09:20 crc kubenswrapper[4953]: I0223 00:09:20.526536 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 23 00:09:21 crc kubenswrapper[4953]: I0223 00:09:21.078584 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:21 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:21 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:21 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:21 crc kubenswrapper[4953]: I0223 00:09:21.078656 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:21 crc kubenswrapper[4953]: I0223 00:09:21.573225 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vr4sx"
Feb 23 00:09:22 crc kubenswrapper[4953]: I0223 00:09:22.078128 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:22 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:22 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:22 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:22 crc kubenswrapper[4953]: I0223 00:09:22.078490 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:23 crc kubenswrapper[4953]: I0223 00:09:23.078649 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:23 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:23 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:23 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:23 crc kubenswrapper[4953]: I0223 00:09:23.078719 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:24 crc kubenswrapper[4953]: I0223 00:09:24.078078 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:24 crc kubenswrapper[4953]: [-]has-synced failed: reason withheld
Feb 23 00:09:24 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:24 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:24 crc kubenswrapper[4953]: I0223 00:09:24.078460 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:25 crc kubenswrapper[4953]: I0223 00:09:25.080434 4953 patch_prober.go:28] interesting pod/router-default-5444994796-p96zz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 00:09:25 crc kubenswrapper[4953]: [+]has-synced ok
Feb 23 00:09:25 crc kubenswrapper[4953]: [+]process-running ok
Feb 23 00:09:25 crc kubenswrapper[4953]: healthz check failed
Feb 23 00:09:25 crc kubenswrapper[4953]: I0223 00:09:25.080814 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p96zz" podUID="045ff803-aa45-4faa-b4ee-7f0de4093f04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 00:09:26 crc kubenswrapper[4953]: I0223 00:09:26.021143 4953 patch_prober.go:28] interesting pod/console-f9d7485db-sbhgq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Feb 23 00:09:26 crc kubenswrapper[4953]: I0223 00:09:26.021203 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sbhgq" podUID="4d7fd8ab-12ef-4686-887c-3f4acbb5a30b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused"
Feb 23 00:09:26 crc kubenswrapper[4953]: I0223 00:09:26.077245 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:26 crc kubenswrapper[4953]: I0223 00:09:26.079818 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-p96zz"
Feb 23 00:09:26 crc kubenswrapper[4953]: I0223 00:09:26.479539 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-blqz2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Feb 23 00:09:26 crc kubenswrapper[4953]: I0223 00:09:26.479593 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-blqz2" podUID="ef7c633a-9dcf-4613-9106-3c1f07a6afab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Feb 23 00:09:26 crc kubenswrapper[4953]: I0223 00:09:26.479621 4953 patch_prober.go:28] interesting pod/downloads-7954f5f757-blqz2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Feb 23 00:09:26 crc kubenswrapper[4953]: I0223 00:09:26.479705 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-blqz2" podUID="ef7c633a-9dcf-4613-9106-3c1f07a6afab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused"
Feb 23 00:09:28 crc kubenswrapper[4953]: I0223 00:09:28.261375 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:09:28 crc kubenswrapper[4953]: I0223 00:09:28.271008 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71837ac6-9a75-4640-af98-633ccdd09e20-metrics-certs\") pod \"network-metrics-daemon-wppgs\" (UID: \"71837ac6-9a75-4640-af98-633ccdd09e20\") " pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:09:28 crc kubenswrapper[4953]: I0223 00:09:28.549765 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wppgs"
Feb 23 00:09:34 crc kubenswrapper[4953]: I0223 00:09:34.418355 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz"
Feb 23 00:09:36 crc kubenswrapper[4953]: I0223 00:09:36.024786 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sbhgq"
Feb 23 00:09:36 crc kubenswrapper[4953]: I0223 00:09:36.028345 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sbhgq"
Feb 23 00:09:36 crc kubenswrapper[4953]: I0223 00:09:36.485251 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-blqz2"
Feb 23 00:09:40 crc kubenswrapper[4953]: I0223 00:09:40.684153 4953 generic.go:334] "Generic (PLEG): container finished" podID="2c463426-aee8-41c2-8f08-e553efa4742a" containerID="399ed9bd77b6f92a7de534442e0d4cf1536d9369a003293a9ab2154aeaf5803e" exitCode=0
Feb 23 00:09:40 crc kubenswrapper[4953]: I0223 00:09:40.684252 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29530080-nw6hr" event={"ID":"2c463426-aee8-41c2-8f08-e553efa4742a","Type":"ContainerDied","Data":"399ed9bd77b6f92a7de534442e0d4cf1536d9369a003293a9ab2154aeaf5803e"}
Feb 23 00:09:44 crc kubenswrapper[4953]: I0223 00:09:44.699576 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 00:09:44 crc kubenswrapper[4953]: I0223 00:09:44.700210 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 00:09:45 crc kubenswrapper[4953]: I0223 00:09:45.803274 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rzc24"
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.722328 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29530080-nw6hr" event={"ID":"2c463426-aee8-41c2-8f08-e553efa4742a","Type":"ContainerDied","Data":"b912855d21e7aaf95e1c76286411647c1184d2904ec6cdaa385e306b37fc8c89"}
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.722721 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b912855d21e7aaf95e1c76286411647c1184d2904ec6cdaa385e306b37fc8c89"
Feb 23 00:09:46 crc kubenswrapper[4953]: E0223 00:09:46.726299 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 23 00:09:46 crc kubenswrapper[4953]: E0223 00:09:46.726450 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wn22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pp2fr_openshift-marketplace(8b304dea-fc9b-4fc9-ae6a-6b8377e36578): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 23 00:09:46 crc kubenswrapper[4953]: E0223 00:09:46.727517 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pp2fr" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578"
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.743575 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29530080-nw6hr"
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.815395 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2c463426-aee8-41c2-8f08-e553efa4742a-serviceca\") pod \"2c463426-aee8-41c2-8f08-e553efa4742a\" (UID: \"2c463426-aee8-41c2-8f08-e553efa4742a\") "
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.815585 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qqxq\" (UniqueName: \"kubernetes.io/projected/2c463426-aee8-41c2-8f08-e553efa4742a-kube-api-access-4qqxq\") pod \"2c463426-aee8-41c2-8f08-e553efa4742a\" (UID: \"2c463426-aee8-41c2-8f08-e553efa4742a\") "
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.816508 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c463426-aee8-41c2-8f08-e553efa4742a-serviceca" (OuterVolumeSpecName: "serviceca") pod "2c463426-aee8-41c2-8f08-e553efa4742a" (UID: "2c463426-aee8-41c2-8f08-e553efa4742a"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:09:46 crc kubenswrapper[4953]: E0223 00:09:46.817213 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 23 00:09:46 crc kubenswrapper[4953]: E0223 00:09:46.817361 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ncxvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-44v2s_openshift-marketplace(b82e6868-6070-4c8b-9564-c7f0ae98c951): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 23 00:09:46 crc kubenswrapper[4953]: E0223 00:09:46.818437 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-44v2s" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951"
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.824957 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c463426-aee8-41c2-8f08-e553efa4742a-kube-api-access-4qqxq" (OuterVolumeSpecName: "kube-api-access-4qqxq") pod "2c463426-aee8-41c2-8f08-e553efa4742a" (UID: "2c463426-aee8-41c2-8f08-e553efa4742a"). InnerVolumeSpecName "kube-api-access-4qqxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.917811 4953 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2c463426-aee8-41c2-8f08-e553efa4742a-serviceca\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:46 crc kubenswrapper[4953]: I0223 00:09:46.917870 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qqxq\" (UniqueName: \"kubernetes.io/projected/2c463426-aee8-41c2-8f08-e553efa4742a-kube-api-access-4qqxq\") on node \"crc\" DevicePath \"\""
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.126524 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wppgs"]
Feb 23 00:09:47 crc kubenswrapper[4953]: W0223 00:09:47.133890 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71837ac6_9a75_4640_af98_633ccdd09e20.slice/crio-4ca0073ba1a93993cf7c138c90056e13d2db5eb29efa84a3ef10d6cc41316af0 WatchSource:0}: Error finding container 4ca0073ba1a93993cf7c138c90056e13d2db5eb29efa84a3ef10d6cc41316af0: Status 404 returned error can't find the container with id 4ca0073ba1a93993cf7c138c90056e13d2db5eb29efa84a3ef10d6cc41316af0
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.728174 4953 generic.go:334] "Generic (PLEG): container finished" podID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerID="6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a" exitCode=0
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.728218 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt7vz" event={"ID":"878d6a76-f92a-46cc-aaad-4b4a2fba9574","Type":"ContainerDied","Data":"6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a"}
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.732262 4953 generic.go:334] "Generic (PLEG): container finished" podID="aec579bf-d684-4ab1-a91c-366365920404" containerID="eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80" exitCode=0
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.732356 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7fhk" event={"ID":"aec579bf-d684-4ab1-a91c-366365920404","Type":"ContainerDied","Data":"eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80"}
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.737586 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m97ns" event={"ID":"5e69edf3-0e86-4ae1-ad9c-a887d52fb655","Type":"ContainerStarted","Data":"f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6"}
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.739637 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wppgs" event={"ID":"71837ac6-9a75-4640-af98-633ccdd09e20","Type":"ContainerStarted","Data":"9a9fd405fff9e36022a4e0fea8ff28f4a60d0a9f5687bf64c0e19d39ca4f9a81"}
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.739666 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wppgs" event={"ID":"71837ac6-9a75-4640-af98-633ccdd09e20","Type":"ContainerStarted","Data":"4ca0073ba1a93993cf7c138c90056e13d2db5eb29efa84a3ef10d6cc41316af0"}
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.741647 4953 generic.go:334] "Generic (PLEG): container finished" podID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerID="bdbc2043ce28728038111d9fdff3621675a2f972a4d3f8e14a875a0cb92147af" exitCode=0
Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.741699 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77kp"
event={"ID":"153fb1b2-654f-412d-8c1f-e4f6c48f967f","Type":"ContainerDied","Data":"bdbc2043ce28728038111d9fdff3621675a2f972a4d3f8e14a875a0cb92147af"} Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.742917 4953 generic.go:334] "Generic (PLEG): container finished" podID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerID="48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991" exitCode=0 Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.742956 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfrxz" event={"ID":"90ff8dec-87f8-49c9-a006-8134bca4e36f","Type":"ContainerDied","Data":"48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991"} Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.747461 4953 generic.go:334] "Generic (PLEG): container finished" podID="0033d07e-7400-4307-89d8-efc2e34acee5" containerID="81965963f5e67588e9731bd2acdc03249d719e6a9c021d20acd6f4f6185ea651" exitCode=0 Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.749370 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pkm" event={"ID":"0033d07e-7400-4307-89d8-efc2e34acee5","Type":"ContainerDied","Data":"81965963f5e67588e9731bd2acdc03249d719e6a9c021d20acd6f4f6185ea651"} Feb 23 00:09:47 crc kubenswrapper[4953]: I0223 00:09:47.749472 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29530080-nw6hr" Feb 23 00:09:47 crc kubenswrapper[4953]: E0223 00:09:47.759193 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pp2fr" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" Feb 23 00:09:47 crc kubenswrapper[4953]: E0223 00:09:47.759280 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44v2s" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" Feb 23 00:09:48 crc kubenswrapper[4953]: I0223 00:09:48.756755 4953 generic.go:334] "Generic (PLEG): container finished" podID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerID="f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6" exitCode=0 Feb 23 00:09:48 crc kubenswrapper[4953]: I0223 00:09:48.756823 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m97ns" event={"ID":"5e69edf3-0e86-4ae1-ad9c-a887d52fb655","Type":"ContainerDied","Data":"f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6"} Feb 23 00:09:48 crc kubenswrapper[4953]: I0223 00:09:48.760647 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wppgs" event={"ID":"71837ac6-9a75-4640-af98-633ccdd09e20","Type":"ContainerStarted","Data":"6421deded70565d4400484c5a4a5e58624a67a8398a887628c8e643ad1a7de25"} Feb 23 00:09:50 crc kubenswrapper[4953]: I0223 00:09:50.772412 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfrxz" 
event={"ID":"90ff8dec-87f8-49c9-a006-8134bca4e36f","Type":"ContainerStarted","Data":"c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69"} Feb 23 00:09:50 crc kubenswrapper[4953]: I0223 00:09:50.787852 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wppgs" podStartSLOduration=165.787834985 podStartE2EDuration="2m45.787834985s" podCreationTimestamp="2026-02-23 00:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:48.794465994 +0000 UTC m=+186.728307850" watchObservedRunningTime="2026-02-23 00:09:50.787834985 +0000 UTC m=+188.721676831" Feb 23 00:09:50 crc kubenswrapper[4953]: I0223 00:09:50.788130 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bfrxz" podStartSLOduration=4.399290742 podStartE2EDuration="40.788126423s" podCreationTimestamp="2026-02-23 00:09:10 +0000 UTC" firstStartedPulling="2026-02-23 00:09:13.154861261 +0000 UTC m=+151.088703117" lastFinishedPulling="2026-02-23 00:09:49.543696952 +0000 UTC m=+187.477538798" observedRunningTime="2026-02-23 00:09:50.78692117 +0000 UTC m=+188.720763026" watchObservedRunningTime="2026-02-23 00:09:50.788126423 +0000 UTC m=+188.721968269" Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.621581 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.622118 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.779341 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt7vz" 
event={"ID":"878d6a76-f92a-46cc-aaad-4b4a2fba9574","Type":"ContainerStarted","Data":"31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06"} Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.781994 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7fhk" event={"ID":"aec579bf-d684-4ab1-a91c-366365920404","Type":"ContainerStarted","Data":"26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1"} Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.783878 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m97ns" event={"ID":"5e69edf3-0e86-4ae1-ad9c-a887d52fb655","Type":"ContainerStarted","Data":"95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc"} Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.786138 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77kp" event={"ID":"153fb1b2-654f-412d-8c1f-e4f6c48f967f","Type":"ContainerStarted","Data":"e2207247cc61af4c8567aa56ada59ec07d7098d94719a14ab85220a22d83d9bc"} Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.788189 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pkm" event={"ID":"0033d07e-7400-4307-89d8-efc2e34acee5","Type":"ContainerStarted","Data":"6f0f7bd73e65d8ce905aeae1edb3eac29e7e3679dd1c5a87ff23298839e1a78c"} Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.798959 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bt7vz" podStartSLOduration=2.949215154 podStartE2EDuration="38.798940391s" podCreationTimestamp="2026-02-23 00:09:13 +0000 UTC" firstStartedPulling="2026-02-23 00:09:15.278000394 +0000 UTC m=+153.211842240" lastFinishedPulling="2026-02-23 00:09:51.127725631 +0000 UTC m=+189.061567477" observedRunningTime="2026-02-23 00:09:51.796226787 +0000 UTC m=+189.730068643" 
watchObservedRunningTime="2026-02-23 00:09:51.798940391 +0000 UTC m=+189.732782237" Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.817828 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m97ns" podStartSLOduration=1.942332207 podStartE2EDuration="37.817813293s" podCreationTimestamp="2026-02-23 00:09:14 +0000 UTC" firstStartedPulling="2026-02-23 00:09:15.340016638 +0000 UTC m=+153.273858484" lastFinishedPulling="2026-02-23 00:09:51.215497724 +0000 UTC m=+189.149339570" observedRunningTime="2026-02-23 00:09:51.812718475 +0000 UTC m=+189.746560341" watchObservedRunningTime="2026-02-23 00:09:51.817813293 +0000 UTC m=+189.751655139" Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.838796 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65pkm" podStartSLOduration=3.976827493 podStartE2EDuration="41.838776272s" podCreationTimestamp="2026-02-23 00:09:10 +0000 UTC" firstStartedPulling="2026-02-23 00:09:13.157068351 +0000 UTC m=+151.090910197" lastFinishedPulling="2026-02-23 00:09:51.01901712 +0000 UTC m=+188.952858976" observedRunningTime="2026-02-23 00:09:51.836100669 +0000 UTC m=+189.769942525" watchObservedRunningTime="2026-02-23 00:09:51.838776272 +0000 UTC m=+189.772618118" Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.853903 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q77kp" podStartSLOduration=2.977639504 podStartE2EDuration="40.853885832s" podCreationTimestamp="2026-02-23 00:09:11 +0000 UTC" firstStartedPulling="2026-02-23 00:09:13.166755014 +0000 UTC m=+151.100596860" lastFinishedPulling="2026-02-23 00:09:51.043001342 +0000 UTC m=+188.976843188" observedRunningTime="2026-02-23 00:09:51.850273304 +0000 UTC m=+189.784115150" watchObservedRunningTime="2026-02-23 00:09:51.853885832 +0000 UTC m=+189.787727678" Feb 23 00:09:51 crc 
kubenswrapper[4953]: I0223 00:09:51.876841 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:51 crc kubenswrapper[4953]: I0223 00:09:51.876882 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:09:52 crc kubenswrapper[4953]: I0223 00:09:52.782136 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bfrxz" podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="registry-server" probeResult="failure" output=< Feb 23 00:09:52 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Feb 23 00:09:52 crc kubenswrapper[4953]: > Feb 23 00:09:52 crc kubenswrapper[4953]: I0223 00:09:52.917011 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-q77kp" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="registry-server" probeResult="failure" output=< Feb 23 00:09:52 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Feb 23 00:09:52 crc kubenswrapper[4953]: > Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.286074 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.286133 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.349040 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.367158 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l7fhk" podStartSLOduration=6.209308684 
podStartE2EDuration="41.36714129s" podCreationTimestamp="2026-02-23 00:09:12 +0000 UTC" firstStartedPulling="2026-02-23 00:09:15.306063256 +0000 UTC m=+153.239905102" lastFinishedPulling="2026-02-23 00:09:50.463895862 +0000 UTC m=+188.397737708" observedRunningTime="2026-02-23 00:09:51.874765499 +0000 UTC m=+189.808607345" watchObservedRunningTime="2026-02-23 00:09:53.36714129 +0000 UTC m=+191.300983136" Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.566890 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.731519 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bt7vz" Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.731561 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bt7vz" Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.774103 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bt7vz" Feb 23 00:09:53 crc kubenswrapper[4953]: I0223 00:09:53.980516 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sb7m5"] Feb 23 00:09:54 crc kubenswrapper[4953]: I0223 00:09:54.716232 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:54 crc kubenswrapper[4953]: I0223 00:09:54.716522 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.751463 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m97ns" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="registry-server" probeResult="failure" 
output=< Feb 23 00:09:55 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Feb 23 00:09:55 crc kubenswrapper[4953]: > Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.877945 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 00:09:55 crc kubenswrapper[4953]: E0223 00:09:55.878143 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ff22ff-a12d-409a-9c01-9c83064f7b6d" containerName="pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.878154 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ff22ff-a12d-409a-9c01-9c83064f7b6d" containerName="pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: E0223 00:09:55.878171 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c463426-aee8-41c2-8f08-e553efa4742a" containerName="image-pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.878177 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c463426-aee8-41c2-8f08-e553efa4742a" containerName="image-pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: E0223 00:09:55.878188 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31fc79b-a9ab-444e-b6b4-d267134f360c" containerName="pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.878194 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31fc79b-a9ab-444e-b6b4-d267134f360c" containerName="pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.878953 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ff22ff-a12d-409a-9c01-9c83064f7b6d" containerName="pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.878969 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c463426-aee8-41c2-8f08-e553efa4742a" containerName="image-pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.878981 4953 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c31fc79b-a9ab-444e-b6b4-d267134f360c" containerName="pruner" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.879324 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.883031 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.883220 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 00:09:55 crc kubenswrapper[4953]: I0223 00:09:55.890654 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.029702 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84e94a31-e586-4f75-82f3-a509207fe57d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"84e94a31-e586-4f75-82f3-a509207fe57d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.029757 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84e94a31-e586-4f75-82f3-a509207fe57d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"84e94a31-e586-4f75-82f3-a509207fe57d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.131091 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84e94a31-e586-4f75-82f3-a509207fe57d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"84e94a31-e586-4f75-82f3-a509207fe57d\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.131132 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84e94a31-e586-4f75-82f3-a509207fe57d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"84e94a31-e586-4f75-82f3-a509207fe57d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.131206 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84e94a31-e586-4f75-82f3-a509207fe57d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"84e94a31-e586-4f75-82f3-a509207fe57d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.149191 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84e94a31-e586-4f75-82f3-a509207fe57d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"84e94a31-e586-4f75-82f3-a509207fe57d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.233642 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.423029 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 00:09:56 crc kubenswrapper[4953]: W0223 00:09:56.431267 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84e94a31_e586_4f75_82f3_a509207fe57d.slice/crio-c12160de5bd41fb0785262e8998c9e696e1249226de5363f6a0e5f077c3848d1 WatchSource:0}: Error finding container c12160de5bd41fb0785262e8998c9e696e1249226de5363f6a0e5f077c3848d1: Status 404 returned error can't find the container with id c12160de5bd41fb0785262e8998c9e696e1249226de5363f6a0e5f077c3848d1 Feb 23 00:09:56 crc kubenswrapper[4953]: I0223 00:09:56.812526 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"84e94a31-e586-4f75-82f3-a509207fe57d","Type":"ContainerStarted","Data":"c12160de5bd41fb0785262e8998c9e696e1249226de5363f6a0e5f077c3848d1"} Feb 23 00:09:57 crc kubenswrapper[4953]: I0223 00:09:57.824209 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"84e94a31-e586-4f75-82f3-a509207fe57d","Type":"ContainerStarted","Data":"008ae1961ae34fff214e4dfc5ef7b450e4b18e82142992a88521842cb11d5dad"} Feb 23 00:09:57 crc kubenswrapper[4953]: I0223 00:09:57.840426 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.840409769 podStartE2EDuration="2.840409769s" podCreationTimestamp="2026-02-23 00:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:09:57.834300897 +0000 UTC m=+195.768142753" watchObservedRunningTime="2026-02-23 00:09:57.840409769 +0000 UTC m=+195.774251615" Feb 23 00:09:58 crc 
kubenswrapper[4953]: I0223 00:09:58.830529 4953 generic.go:334] "Generic (PLEG): container finished" podID="84e94a31-e586-4f75-82f3-a509207fe57d" containerID="008ae1961ae34fff214e4dfc5ef7b450e4b18e82142992a88521842cb11d5dad" exitCode=0 Feb 23 00:09:58 crc kubenswrapper[4953]: I0223 00:09:58.830571 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"84e94a31-e586-4f75-82f3-a509207fe57d","Type":"ContainerDied","Data":"008ae1961ae34fff214e4dfc5ef7b450e4b18e82142992a88521842cb11d5dad"} Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.037937 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.180005 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84e94a31-e586-4f75-82f3-a509207fe57d-kube-api-access\") pod \"84e94a31-e586-4f75-82f3-a509207fe57d\" (UID: \"84e94a31-e586-4f75-82f3-a509207fe57d\") " Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.180107 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84e94a31-e586-4f75-82f3-a509207fe57d-kubelet-dir\") pod \"84e94a31-e586-4f75-82f3-a509207fe57d\" (UID: \"84e94a31-e586-4f75-82f3-a509207fe57d\") " Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.180163 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84e94a31-e586-4f75-82f3-a509207fe57d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84e94a31-e586-4f75-82f3-a509207fe57d" (UID: "84e94a31-e586-4f75-82f3-a509207fe57d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.180367 4953 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84e94a31-e586-4f75-82f3-a509207fe57d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.184937 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e94a31-e586-4f75-82f3-a509207fe57d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84e94a31-e586-4f75-82f3-a509207fe57d" (UID: "84e94a31-e586-4f75-82f3-a509207fe57d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.281347 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84e94a31-e586-4f75-82f3-a509207fe57d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.844867 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.844936 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"84e94a31-e586-4f75-82f3-a509207fe57d","Type":"ContainerDied","Data":"c12160de5bd41fb0785262e8998c9e696e1249226de5363f6a0e5f077c3848d1"} Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.844991 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12160de5bd41fb0785262e8998c9e696e1249226de5363f6a0e5f077c3848d1" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.898649 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.898726 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:10:00 crc kubenswrapper[4953]: I0223 00:10:00.971115 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.667326 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.677019 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 00:10:01 crc kubenswrapper[4953]: E0223 00:10:01.677277 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e94a31-e586-4f75-82f3-a509207fe57d" containerName="pruner" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.677309 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e94a31-e586-4f75-82f3-a509207fe57d" containerName="pruner" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.677414 4953 
memory_manager.go:354] "RemoveStaleState removing state" podUID="84e94a31-e586-4f75-82f3-a509207fe57d" containerName="pruner" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.677874 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.683081 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.683486 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.689781 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.733236 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.800543 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-var-lock\") pod \"installer-9-crc\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.800587 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.800623 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.902913 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.903172 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-var-lock\") pod \"installer-9-crc\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.903216 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.903405 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.904041 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-var-lock\") pod \"installer-9-crc\" (UID: 
\"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.987661 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.992476 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:10:01 crc kubenswrapper[4953]: I0223 00:10:01.993552 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:02 crc kubenswrapper[4953]: I0223 00:10:02.033469 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 00:10:02 crc kubenswrapper[4953]: I0223 00:10:02.530168 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:10:02 crc kubenswrapper[4953]: I0223 00:10:02.696524 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 00:10:02 crc kubenswrapper[4953]: I0223 00:10:02.856783 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263","Type":"ContainerStarted","Data":"f5b3d4e18a7de0e3b6f93939cb1e2bc1da50f07b977554d199febf8ac384bc1d"} Feb 23 00:10:03 crc kubenswrapper[4953]: I0223 00:10:03.352014 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:10:03 crc kubenswrapper[4953]: I0223 00:10:03.777403 4953 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bt7vz" Feb 23 00:10:03 crc kubenswrapper[4953]: I0223 00:10:03.865730 4953 generic.go:334] "Generic (PLEG): container finished" podID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerID="7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e" exitCode=0 Feb 23 00:10:03 crc kubenswrapper[4953]: I0223 00:10:03.865845 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v2s" event={"ID":"b82e6868-6070-4c8b-9564-c7f0ae98c951","Type":"ContainerDied","Data":"7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e"} Feb 23 00:10:03 crc kubenswrapper[4953]: I0223 00:10:03.868122 4953 generic.go:334] "Generic (PLEG): container finished" podID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerID="4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b" exitCode=0 Feb 23 00:10:03 crc kubenswrapper[4953]: I0223 00:10:03.868186 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp2fr" event={"ID":"8b304dea-fc9b-4fc9-ae6a-6b8377e36578","Type":"ContainerDied","Data":"4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b"} Feb 23 00:10:03 crc kubenswrapper[4953]: I0223 00:10:03.872727 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263","Type":"ContainerStarted","Data":"079b883eae44b359c1298d83dc807dd02f410810878efb788ffb519a836d0213"} Feb 23 00:10:03 crc kubenswrapper[4953]: I0223 00:10:03.921973 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.921951693 podStartE2EDuration="2.921951693s" podCreationTimestamp="2026-02-23 00:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 
00:10:03.916754755 +0000 UTC m=+201.850596601" watchObservedRunningTime="2026-02-23 00:10:03.921951693 +0000 UTC m=+201.855793549" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.004873 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfrxz"] Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.005143 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bfrxz" podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="registry-server" containerID="cri-o://c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69" gracePeriod=2 Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.359838 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.451777 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-utilities\") pod \"90ff8dec-87f8-49c9-a006-8134bca4e36f\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.451868 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-catalog-content\") pod \"90ff8dec-87f8-49c9-a006-8134bca4e36f\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.451901 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmb2b\" (UniqueName: \"kubernetes.io/projected/90ff8dec-87f8-49c9-a006-8134bca4e36f-kube-api-access-nmb2b\") pod \"90ff8dec-87f8-49c9-a006-8134bca4e36f\" (UID: \"90ff8dec-87f8-49c9-a006-8134bca4e36f\") " Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 
00:10:04.453060 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-utilities" (OuterVolumeSpecName: "utilities") pod "90ff8dec-87f8-49c9-a006-8134bca4e36f" (UID: "90ff8dec-87f8-49c9-a006-8134bca4e36f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.458109 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ff8dec-87f8-49c9-a006-8134bca4e36f-kube-api-access-nmb2b" (OuterVolumeSpecName: "kube-api-access-nmb2b") pod "90ff8dec-87f8-49c9-a006-8134bca4e36f" (UID: "90ff8dec-87f8-49c9-a006-8134bca4e36f"). InnerVolumeSpecName "kube-api-access-nmb2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.510576 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90ff8dec-87f8-49c9-a006-8134bca4e36f" (UID: "90ff8dec-87f8-49c9-a006-8134bca4e36f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.553738 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmb2b\" (UniqueName: \"kubernetes.io/projected/90ff8dec-87f8-49c9-a006-8134bca4e36f-kube-api-access-nmb2b\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.553774 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.553787 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ff8dec-87f8-49c9-a006-8134bca4e36f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.606224 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q77kp"] Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.606539 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q77kp" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="registry-server" containerID="cri-o://e2207247cc61af4c8567aa56ada59ec07d7098d94719a14ab85220a22d83d9bc" gracePeriod=2 Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.768593 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.829840 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.882023 4953 generic.go:334] "Generic (PLEG): container finished" podID="90ff8dec-87f8-49c9-a006-8134bca4e36f" 
containerID="c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69" exitCode=0 Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.882091 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfrxz" event={"ID":"90ff8dec-87f8-49c9-a006-8134bca4e36f","Type":"ContainerDied","Data":"c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69"} Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.882119 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfrxz" event={"ID":"90ff8dec-87f8-49c9-a006-8134bca4e36f","Type":"ContainerDied","Data":"129a39c40670d6f4648586ee01e52402800b630ff01cd70b46c8a44cf87c5427"} Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.882135 4953 scope.go:117] "RemoveContainer" containerID="c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.882298 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfrxz" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.889873 4953 generic.go:334] "Generic (PLEG): container finished" podID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerID="e2207247cc61af4c8567aa56ada59ec07d7098d94719a14ab85220a22d83d9bc" exitCode=0 Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.889915 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77kp" event={"ID":"153fb1b2-654f-412d-8c1f-e4f6c48f967f","Type":"ContainerDied","Data":"e2207247cc61af4c8567aa56ada59ec07d7098d94719a14ab85220a22d83d9bc"} Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.919846 4953 scope.go:117] "RemoveContainer" containerID="48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.921964 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfrxz"] Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.928120 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bfrxz"] Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.939095 4953 scope.go:117] "RemoveContainer" containerID="ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.953980 4953 scope.go:117] "RemoveContainer" containerID="c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69" Feb 23 00:10:04 crc kubenswrapper[4953]: E0223 00:10:04.954617 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69\": container with ID starting with c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69 not found: ID does not exist" 
containerID="c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.954649 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69"} err="failed to get container status \"c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69\": rpc error: code = NotFound desc = could not find container \"c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69\": container with ID starting with c83b3132238c8ea82f50a14a7b3401e4b1bdd8cf652f81bfd5a7db34ddf31c69 not found: ID does not exist" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.954689 4953 scope.go:117] "RemoveContainer" containerID="48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991" Feb 23 00:10:04 crc kubenswrapper[4953]: E0223 00:10:04.954995 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991\": container with ID starting with 48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991 not found: ID does not exist" containerID="48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.955018 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991"} err="failed to get container status \"48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991\": rpc error: code = NotFound desc = could not find container \"48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991\": container with ID starting with 48552c0c685174d9023f6a74c696547ed96f246d9663f532ca80ebaea3c4c991 not found: ID does not exist" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.955030 4953 scope.go:117] 
"RemoveContainer" containerID="ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0" Feb 23 00:10:04 crc kubenswrapper[4953]: E0223 00:10:04.955254 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0\": container with ID starting with ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0 not found: ID does not exist" containerID="ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.955272 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0"} err="failed to get container status \"ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0\": rpc error: code = NotFound desc = could not find container \"ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0\": container with ID starting with ba11f2cdd2dd81496ac3dc550a5f252857ca7723ca241fe6bc37899c300892a0 not found: ID does not exist" Feb 23 00:10:04 crc kubenswrapper[4953]: I0223 00:10:04.967770 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.059539 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-utilities\") pod \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.059581 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6kdk\" (UniqueName: \"kubernetes.io/projected/153fb1b2-654f-412d-8c1f-e4f6c48f967f-kube-api-access-l6kdk\") pod \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.059649 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-catalog-content\") pod \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\" (UID: \"153fb1b2-654f-412d-8c1f-e4f6c48f967f\") " Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.061826 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-utilities" (OuterVolumeSpecName: "utilities") pod "153fb1b2-654f-412d-8c1f-e4f6c48f967f" (UID: "153fb1b2-654f-412d-8c1f-e4f6c48f967f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.064650 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153fb1b2-654f-412d-8c1f-e4f6c48f967f-kube-api-access-l6kdk" (OuterVolumeSpecName: "kube-api-access-l6kdk") pod "153fb1b2-654f-412d-8c1f-e4f6c48f967f" (UID: "153fb1b2-654f-412d-8c1f-e4f6c48f967f"). InnerVolumeSpecName "kube-api-access-l6kdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.111018 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "153fb1b2-654f-412d-8c1f-e4f6c48f967f" (UID: "153fb1b2-654f-412d-8c1f-e4f6c48f967f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.161189 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.161247 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6kdk\" (UniqueName: \"kubernetes.io/projected/153fb1b2-654f-412d-8c1f-e4f6c48f967f-kube-api-access-l6kdk\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.161269 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/153fb1b2-654f-412d-8c1f-e4f6c48f967f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.336091 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" path="/var/lib/kubelet/pods/90ff8dec-87f8-49c9-a006-8134bca4e36f/volumes" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.897245 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v2s" event={"ID":"b82e6868-6070-4c8b-9564-c7f0ae98c951","Type":"ContainerStarted","Data":"cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d"} Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.899760 4953 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-pp2fr" event={"ID":"8b304dea-fc9b-4fc9-ae6a-6b8377e36578","Type":"ContainerStarted","Data":"1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586"} Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.902566 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q77kp" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.903003 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77kp" event={"ID":"153fb1b2-654f-412d-8c1f-e4f6c48f967f","Type":"ContainerDied","Data":"922e9a8876f8dcb315e6981839c6519cef9cf9df311abc15e7a3b51207bb8752"} Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.903035 4953 scope.go:117] "RemoveContainer" containerID="e2207247cc61af4c8567aa56ada59ec07d7098d94719a14ab85220a22d83d9bc" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.916248 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-44v2s" podStartSLOduration=2.7675297370000003 podStartE2EDuration="54.916235861s" podCreationTimestamp="2026-02-23 00:09:11 +0000 UTC" firstStartedPulling="2026-02-23 00:09:13.161156202 +0000 UTC m=+151.094998048" lastFinishedPulling="2026-02-23 00:10:05.309862326 +0000 UTC m=+203.243704172" observedRunningTime="2026-02-23 00:10:05.915795559 +0000 UTC m=+203.849637405" watchObservedRunningTime="2026-02-23 00:10:05.916235861 +0000 UTC m=+203.850077707" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.926687 4953 scope.go:117] "RemoveContainer" containerID="bdbc2043ce28728038111d9fdff3621675a2f972a4d3f8e14a875a0cb92147af" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.949281 4953 scope.go:117] "RemoveContainer" containerID="ae763557ff0c86a17c50a20076ba3482df4efb0222773a6e3b495060eae34099" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.955900 4953 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pp2fr" podStartSLOduration=2.961590329 podStartE2EDuration="52.955883339s" podCreationTimestamp="2026-02-23 00:09:13 +0000 UTC" firstStartedPulling="2026-02-23 00:09:15.35997713 +0000 UTC m=+153.293818976" lastFinishedPulling="2026-02-23 00:10:05.35427015 +0000 UTC m=+203.288111986" observedRunningTime="2026-02-23 00:10:05.939953624 +0000 UTC m=+203.873795470" watchObservedRunningTime="2026-02-23 00:10:05.955883339 +0000 UTC m=+203.889725185" Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.958366 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q77kp"] Feb 23 00:10:05 crc kubenswrapper[4953]: I0223 00:10:05.962691 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q77kp"] Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.404332 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt7vz"] Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.404840 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bt7vz" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerName="registry-server" containerID="cri-o://31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06" gracePeriod=2 Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.737493 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bt7vz" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.880391 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-utilities\") pod \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.880445 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7zdl\" (UniqueName: \"kubernetes.io/projected/878d6a76-f92a-46cc-aaad-4b4a2fba9574-kube-api-access-c7zdl\") pod \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.880515 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-catalog-content\") pod \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\" (UID: \"878d6a76-f92a-46cc-aaad-4b4a2fba9574\") " Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.881163 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-utilities" (OuterVolumeSpecName: "utilities") pod "878d6a76-f92a-46cc-aaad-4b4a2fba9574" (UID: "878d6a76-f92a-46cc-aaad-4b4a2fba9574"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.885458 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878d6a76-f92a-46cc-aaad-4b4a2fba9574-kube-api-access-c7zdl" (OuterVolumeSpecName: "kube-api-access-c7zdl") pod "878d6a76-f92a-46cc-aaad-4b4a2fba9574" (UID: "878d6a76-f92a-46cc-aaad-4b4a2fba9574"). InnerVolumeSpecName "kube-api-access-c7zdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.913104 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "878d6a76-f92a-46cc-aaad-4b4a2fba9574" (UID: "878d6a76-f92a-46cc-aaad-4b4a2fba9574"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.915111 4953 generic.go:334] "Generic (PLEG): container finished" podID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerID="31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06" exitCode=0 Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.915153 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt7vz" event={"ID":"878d6a76-f92a-46cc-aaad-4b4a2fba9574","Type":"ContainerDied","Data":"31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06"} Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.915185 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bt7vz" event={"ID":"878d6a76-f92a-46cc-aaad-4b4a2fba9574","Type":"ContainerDied","Data":"7a57ee74f30d292512b0ef3d4d73fa0fc8e49242ce113feb6d17e094f3c7b8e1"} Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.915181 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bt7vz" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.915201 4953 scope.go:117] "RemoveContainer" containerID="31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.927202 4953 scope.go:117] "RemoveContainer" containerID="6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.945896 4953 scope.go:117] "RemoveContainer" containerID="299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.953264 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt7vz"] Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.956993 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bt7vz"] Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.970085 4953 scope.go:117] "RemoveContainer" containerID="31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06" Feb 23 00:10:06 crc kubenswrapper[4953]: E0223 00:10:06.970456 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06\": container with ID starting with 31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06 not found: ID does not exist" containerID="31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.970555 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06"} err="failed to get container status \"31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06\": rpc error: code = NotFound desc = could not find container 
\"31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06\": container with ID starting with 31990e897b050f445b2a26ada477bdccd08b53cc7e475113aad9b3e90e3f0c06 not found: ID does not exist" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.970640 4953 scope.go:117] "RemoveContainer" containerID="6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a" Feb 23 00:10:06 crc kubenswrapper[4953]: E0223 00:10:06.970890 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a\": container with ID starting with 6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a not found: ID does not exist" containerID="6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.970963 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a"} err="failed to get container status \"6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a\": rpc error: code = NotFound desc = could not find container \"6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a\": container with ID starting with 6a0f8b67871c3cddbb3e106b44e5b19dd6336ada2b18807b9f66d46ede150e5a not found: ID does not exist" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.971044 4953 scope.go:117] "RemoveContainer" containerID="299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0" Feb 23 00:10:06 crc kubenswrapper[4953]: E0223 00:10:06.971299 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0\": container with ID starting with 299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0 not found: ID does not exist" 
containerID="299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.971398 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0"} err="failed to get container status \"299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0\": rpc error: code = NotFound desc = could not find container \"299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0\": container with ID starting with 299d2f76a7c30bcb6b505ed1d70ca0516ebf6b43c08f5ef4f39028dcb0f5e7a0 not found: ID does not exist" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.982823 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.982935 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7zdl\" (UniqueName: \"kubernetes.io/projected/878d6a76-f92a-46cc-aaad-4b4a2fba9574-kube-api-access-c7zdl\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:06 crc kubenswrapper[4953]: I0223 00:10:06.983018 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878d6a76-f92a-46cc-aaad-4b4a2fba9574-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.003431 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m97ns"] Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.003834 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m97ns" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="registry-server" 
containerID="cri-o://95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc" gracePeriod=2 Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.303247 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.331639 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" path="/var/lib/kubelet/pods/153fb1b2-654f-412d-8c1f-e4f6c48f967f/volumes" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.332203 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" path="/var/lib/kubelet/pods/878d6a76-f92a-46cc-aaad-4b4a2fba9574/volumes" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.387257 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-catalog-content\") pod \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.387363 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-utilities\") pod \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.387396 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsdvt\" (UniqueName: \"kubernetes.io/projected/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-kube-api-access-dsdvt\") pod \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\" (UID: \"5e69edf3-0e86-4ae1-ad9c-a887d52fb655\") " Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.388600 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-utilities" (OuterVolumeSpecName: "utilities") pod "5e69edf3-0e86-4ae1-ad9c-a887d52fb655" (UID: "5e69edf3-0e86-4ae1-ad9c-a887d52fb655"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.391676 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-kube-api-access-dsdvt" (OuterVolumeSpecName: "kube-api-access-dsdvt") pod "5e69edf3-0e86-4ae1-ad9c-a887d52fb655" (UID: "5e69edf3-0e86-4ae1-ad9c-a887d52fb655"). InnerVolumeSpecName "kube-api-access-dsdvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.489144 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.489177 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsdvt\" (UniqueName: \"kubernetes.io/projected/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-kube-api-access-dsdvt\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.510127 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e69edf3-0e86-4ae1-ad9c-a887d52fb655" (UID: "5e69edf3-0e86-4ae1-ad9c-a887d52fb655"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.590513 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e69edf3-0e86-4ae1-ad9c-a887d52fb655-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.922361 4953 generic.go:334] "Generic (PLEG): container finished" podID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerID="95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc" exitCode=0 Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.922399 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m97ns" event={"ID":"5e69edf3-0e86-4ae1-ad9c-a887d52fb655","Type":"ContainerDied","Data":"95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc"} Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.922420 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m97ns" event={"ID":"5e69edf3-0e86-4ae1-ad9c-a887d52fb655","Type":"ContainerDied","Data":"6ae50be8fd4d9f937619a46c0e18a7ab7f33efab9fada8c2d5314409ec05fae9"} Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.922426 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m97ns" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.922456 4953 scope.go:117] "RemoveContainer" containerID="95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.934740 4953 scope.go:117] "RemoveContainer" containerID="f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.952316 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m97ns"] Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.958311 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m97ns"] Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.971373 4953 scope.go:117] "RemoveContainer" containerID="aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.992756 4953 scope.go:117] "RemoveContainer" containerID="95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc" Feb 23 00:10:07 crc kubenswrapper[4953]: E0223 00:10:07.993915 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc\": container with ID starting with 95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc not found: ID does not exist" containerID="95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.993976 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc"} err="failed to get container status \"95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc\": rpc error: code = NotFound desc = could not find container 
\"95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc\": container with ID starting with 95d89271c6b372abcfceb145d2f16efff72ed19478ab1eb850d394e93fb682bc not found: ID does not exist" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.994017 4953 scope.go:117] "RemoveContainer" containerID="f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6" Feb 23 00:10:07 crc kubenswrapper[4953]: E0223 00:10:07.994403 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6\": container with ID starting with f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6 not found: ID does not exist" containerID="f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.994445 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6"} err="failed to get container status \"f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6\": rpc error: code = NotFound desc = could not find container \"f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6\": container with ID starting with f143a670af65caf10b0f2e5f6894b473691091f03f9f8938224f6597505dede6 not found: ID does not exist" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.994479 4953 scope.go:117] "RemoveContainer" containerID="aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b" Feb 23 00:10:07 crc kubenswrapper[4953]: E0223 00:10:07.994807 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b\": container with ID starting with aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b not found: ID does not exist" 
containerID="aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b" Feb 23 00:10:07 crc kubenswrapper[4953]: I0223 00:10:07.994839 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b"} err="failed to get container status \"aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b\": rpc error: code = NotFound desc = could not find container \"aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b\": container with ID starting with aabc05c33f90f4aed082dc155f0439b9a9f940141f94adc0d279b3e4d1f13e6b not found: ID does not exist" Feb 23 00:10:09 crc kubenswrapper[4953]: I0223 00:10:09.334389 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" path="/var/lib/kubelet/pods/5e69edf3-0e86-4ae1-ad9c-a887d52fb655/volumes" Feb 23 00:10:11 crc kubenswrapper[4953]: I0223 00:10:11.686216 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:10:11 crc kubenswrapper[4953]: I0223 00:10:11.686799 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:10:11 crc kubenswrapper[4953]: I0223 00:10:11.724885 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:10:12 crc kubenswrapper[4953]: I0223 00:10:12.005158 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:10:14 crc kubenswrapper[4953]: I0223 00:10:14.339164 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:10:14 crc kubenswrapper[4953]: I0223 00:10:14.340402 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:10:14 crc kubenswrapper[4953]: I0223 00:10:14.408372 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:10:14 crc kubenswrapper[4953]: I0223 00:10:14.699779 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:10:14 crc kubenswrapper[4953]: I0223 00:10:14.699900 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:10:14 crc kubenswrapper[4953]: I0223 00:10:14.699973 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:10:14 crc kubenswrapper[4953]: I0223 00:10:14.700801 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08"} pod="openshift-machine-config-operator/machine-config-daemon-gpl86" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 00:10:14 crc kubenswrapper[4953]: I0223 00:10:14.700954 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" 
containerID="cri-o://78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08" gracePeriod=600 Feb 23 00:10:15 crc kubenswrapper[4953]: I0223 00:10:15.029718 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:10:15 crc kubenswrapper[4953]: I0223 00:10:15.973970 4953 generic.go:334] "Generic (PLEG): container finished" podID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerID="78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08" exitCode=0 Feb 23 00:10:15 crc kubenswrapper[4953]: I0223 00:10:15.974051 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerDied","Data":"78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08"} Feb 23 00:10:15 crc kubenswrapper[4953]: I0223 00:10:15.974730 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"e85dfaed3628c17b672280ee0d620d6df9b175b1e8985b9cad3e96240e250b5d"} Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.005324 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" podUID="c90b2280-0314-4b8a-979f-d678ee9a4a98" containerName="oauth-openshift" containerID="cri-o://b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741" gracePeriod=15 Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.356264 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.397980 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b49777cd7-krtjr"] Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398384 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="extract-utilities" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398401 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="extract-utilities" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398417 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerName="extract-utilities" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398423 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerName="extract-utilities" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398440 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398446 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398457 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="extract-content" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398463 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="extract-content" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398475 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398482 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398495 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c90b2280-0314-4b8a-979f-d678ee9a4a98" containerName="oauth-openshift" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398501 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c90b2280-0314-4b8a-979f-d678ee9a4a98" containerName="oauth-openshift" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398513 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="extract-content" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398519 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="extract-content" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398527 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398533 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398544 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398550 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398573 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="extract-content" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398579 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="extract-content" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398588 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="extract-utilities" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398595 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="extract-utilities" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398607 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="extract-utilities" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398613 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="extract-utilities" Feb 23 00:10:19 crc kubenswrapper[4953]: E0223 00:10:19.398626 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerName="extract-content" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398633 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerName="extract-content" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398779 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="878d6a76-f92a-46cc-aaad-4b4a2fba9574" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398788 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e69edf3-0e86-4ae1-ad9c-a887d52fb655" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398797 4953 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="90ff8dec-87f8-49c9-a006-8134bca4e36f" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398818 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c90b2280-0314-4b8a-979f-d678ee9a4a98" containerName="oauth-openshift" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.398829 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="153fb1b2-654f-412d-8c1f-e4f6c48f967f" containerName="registry-server" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.400455 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.409371 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b49777cd7-krtjr"] Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535282 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vjqf\" (UniqueName: \"kubernetes.io/projected/c90b2280-0314-4b8a-979f-d678ee9a4a98-kube-api-access-8vjqf\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535379 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-trusted-ca-bundle\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535410 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-idp-0-file-data\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: 
\"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535430 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-serving-cert\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535464 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-session\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535497 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-cliconfig\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535527 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-service-ca\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535559 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-router-certs\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: 
I0223 00:10:19.535601 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-provider-selection\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535632 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-login\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535663 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-ocp-branding-template\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535697 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-dir\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535728 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-policies\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535755 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-error\") pod \"c90b2280-0314-4b8a-979f-d678ee9a4a98\" (UID: \"c90b2280-0314-4b8a-979f-d678ee9a4a98\") " Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535934 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535962 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b95734a-ba78-48fc-8c59-28d3cbd8276d-audit-dir\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.535988 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536017 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-login\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " 
pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536045 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-session\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536091 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536116 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536141 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-error\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536162 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536193 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536218 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536251 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536275 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-audit-policies\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536317 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qscpf\" (UniqueName: \"kubernetes.io/projected/7b95734a-ba78-48fc-8c59-28d3cbd8276d-kube-api-access-qscpf\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536775 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536798 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536881 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.536888 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.537527 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.542015 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.542023 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90b2280-0314-4b8a-979f-d678ee9a4a98-kube-api-access-8vjqf" (OuterVolumeSpecName: "kube-api-access-8vjqf") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "kube-api-access-8vjqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.542207 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.542725 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.542887 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.543145 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.543229 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.542275 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.549700 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c90b2280-0314-4b8a-979f-d678ee9a4a98" (UID: "c90b2280-0314-4b8a-979f-d678ee9a4a98"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.637884 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-login\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.638160 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-session\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.638320 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.638492 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.638609 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-error\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.638734 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.638859 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.638964 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639123 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639229 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-audit-policies\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639357 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qscpf\" (UniqueName: \"kubernetes.io/projected/7b95734a-ba78-48fc-8c59-28d3cbd8276d-kube-api-access-qscpf\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639490 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639609 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b95734a-ba78-48fc-8c59-28d3cbd8276d-audit-dir\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639717 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639871 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.640249 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.640378 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.640494 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.640602 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.640710 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639977 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-audit-policies\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639629 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.640609 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.639955 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b95734a-ba78-48fc-8c59-28d3cbd8276d-audit-dir\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.640801 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.641014 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.641040 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.641056 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.641072 4953 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.640498 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.641088 4953 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c90b2280-0314-4b8a-979f-d678ee9a4a98-audit-policies\") on 
node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.641105 4953 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c90b2280-0314-4b8a-979f-d678ee9a4a98-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.641121 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vjqf\" (UniqueName: \"kubernetes.io/projected/c90b2280-0314-4b8a-979f-d678ee9a4a98-kube-api-access-8vjqf\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.641962 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-session\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.642073 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.642076 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-login\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.642916 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.643662 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.643745 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.644101 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-user-template-error\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.644669 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7b95734a-ba78-48fc-8c59-28d3cbd8276d-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.654483 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qscpf\" (UniqueName: \"kubernetes.io/projected/7b95734a-ba78-48fc-8c59-28d3cbd8276d-kube-api-access-qscpf\") pod \"oauth-openshift-7b49777cd7-krtjr\" (UID: \"7b95734a-ba78-48fc-8c59-28d3cbd8276d\") " pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.730157 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.932008 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b49777cd7-krtjr"] Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.996146 4953 generic.go:334] "Generic (PLEG): container finished" podID="c90b2280-0314-4b8a-979f-d678ee9a4a98" containerID="b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741" exitCode=0 Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.996270 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" event={"ID":"c90b2280-0314-4b8a-979f-d678ee9a4a98","Type":"ContainerDied","Data":"b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741"} Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.996277 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.996347 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sb7m5" event={"ID":"c90b2280-0314-4b8a-979f-d678ee9a4a98","Type":"ContainerDied","Data":"5dfb7cbae51a698008d4e4cdaca99e67c6a87a12cbbbe3a8fa1359320daf2bf6"} Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.996369 4953 scope.go:117] "RemoveContainer" containerID="b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741" Feb 23 00:10:19 crc kubenswrapper[4953]: I0223 00:10:19.998847 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" event={"ID":"7b95734a-ba78-48fc-8c59-28d3cbd8276d","Type":"ContainerStarted","Data":"c4db2ed183744c0749ab75991ac7f712a0ca4fe47db10765051417f81676a0e6"} Feb 23 00:10:20 crc kubenswrapper[4953]: I0223 00:10:20.021745 4953 scope.go:117] "RemoveContainer" containerID="b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741" Feb 23 00:10:20 crc kubenswrapper[4953]: E0223 00:10:20.022318 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741\": container with ID starting with b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741 not found: ID does not exist" containerID="b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741" Feb 23 00:10:20 crc kubenswrapper[4953]: I0223 00:10:20.022356 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741"} err="failed to get container status \"b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741\": rpc error: code = NotFound desc = could not find container 
\"b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741\": container with ID starting with b0e05d6d031ba8d913cbfa5746fff8f9f43db64b9a742d3863072a48de4b0741 not found: ID does not exist" Feb 23 00:10:20 crc kubenswrapper[4953]: I0223 00:10:20.044463 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sb7m5"] Feb 23 00:10:20 crc kubenswrapper[4953]: I0223 00:10:20.046875 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sb7m5"] Feb 23 00:10:21 crc kubenswrapper[4953]: I0223 00:10:21.008655 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" event={"ID":"7b95734a-ba78-48fc-8c59-28d3cbd8276d","Type":"ContainerStarted","Data":"053077487c73b57dddf12722d5fdc37c16a3baf10e619401eb2b7c851fa97285"} Feb 23 00:10:21 crc kubenswrapper[4953]: I0223 00:10:21.009023 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:21 crc kubenswrapper[4953]: I0223 00:10:21.017168 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" Feb 23 00:10:21 crc kubenswrapper[4953]: I0223 00:10:21.041232 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b49777cd7-krtjr" podStartSLOduration=27.041208306 podStartE2EDuration="27.041208306s" podCreationTimestamp="2026-02-23 00:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:10:21.036433499 +0000 UTC m=+218.970275355" watchObservedRunningTime="2026-02-23 00:10:21.041208306 +0000 UTC m=+218.975050162" Feb 23 00:10:21 crc kubenswrapper[4953]: I0223 00:10:21.331608 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="c90b2280-0314-4b8a-979f-d678ee9a4a98" path="/var/lib/kubelet/pods/c90b2280-0314-4b8a-979f-d678ee9a4a98/volumes" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.346661 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65pkm"] Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.347747 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65pkm" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" containerName="registry-server" containerID="cri-o://6f0f7bd73e65d8ce905aeae1edb3eac29e7e3679dd1c5a87ff23298839e1a78c" gracePeriod=30 Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.350760 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-44v2s"] Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.350976 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-44v2s" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerName="registry-server" containerID="cri-o://cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d" gracePeriod=30 Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.372227 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pjdts"] Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.372478 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" podUID="69abc20a-54cb-47c6-884d-e12fd1984fdb" containerName="marketplace-operator" containerID="cri-o://a9aeffa26217ec9e8a9cc24f0ff4e153cedea183bfaa19dfaf30fc8355ec0bc5" gracePeriod=30 Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.387013 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7fhk"] Feb 23 00:10:34 crc 
kubenswrapper[4953]: I0223 00:10:34.387217 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l7fhk" podUID="aec579bf-d684-4ab1-a91c-366365920404" containerName="registry-server" containerID="cri-o://26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1" gracePeriod=30 Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.401118 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pp2fr"] Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.401397 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pp2fr" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="registry-server" containerID="cri-o://1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586" gracePeriod=30 Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.408158 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xn8m6"] Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.408796 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.427349 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xn8m6"] Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.508729 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-pp2fr" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="registry-server" probeResult="failure" output="" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.518355 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pp2fr" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="registry-server" probeResult="failure" output="" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.549445 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.549493 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7fpk\" (UniqueName: \"kubernetes.io/projected/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-kube-api-access-p7fpk\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.549515 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.650542 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.650582 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7fpk\" (UniqueName: \"kubernetes.io/projected/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-kube-api-access-p7fpk\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.650604 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.652006 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc 
kubenswrapper[4953]: I0223 00:10:34.660626 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.671810 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7fpk\" (UniqueName: \"kubernetes.io/projected/e6fcf4dd-f162-4b92-82b2-98bf669fd3f2-kube-api-access-p7fpk\") pod \"marketplace-operator-79b997595-xn8m6\" (UID: \"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2\") " pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.777400 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.879824 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.891582 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:10:34 crc kubenswrapper[4953]: I0223 00:10:34.894294 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.054715 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-utilities\") pod \"b82e6868-6070-4c8b-9564-c7f0ae98c951\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.054763 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wn22\" (UniqueName: \"kubernetes.io/projected/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-kube-api-access-9wn22\") pod \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.054785 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-catalog-content\") pod \"aec579bf-d684-4ab1-a91c-366365920404\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.054859 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-utilities\") pod \"aec579bf-d684-4ab1-a91c-366365920404\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.054908 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pch4l\" (UniqueName: \"kubernetes.io/projected/aec579bf-d684-4ab1-a91c-366365920404-kube-api-access-pch4l\") pod \"aec579bf-d684-4ab1-a91c-366365920404\" (UID: \"aec579bf-d684-4ab1-a91c-366365920404\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.054934 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-catalog-content\") pod \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.054980 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncxvp\" (UniqueName: \"kubernetes.io/projected/b82e6868-6070-4c8b-9564-c7f0ae98c951-kube-api-access-ncxvp\") pod \"b82e6868-6070-4c8b-9564-c7f0ae98c951\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.055003 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-catalog-content\") pod \"b82e6868-6070-4c8b-9564-c7f0ae98c951\" (UID: \"b82e6868-6070-4c8b-9564-c7f0ae98c951\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.055019 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-utilities\") pod \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\" (UID: \"8b304dea-fc9b-4fc9-ae6a-6b8377e36578\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.055767 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-utilities" (OuterVolumeSpecName: "utilities") pod "b82e6868-6070-4c8b-9564-c7f0ae98c951" (UID: "b82e6868-6070-4c8b-9564-c7f0ae98c951"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.055944 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-utilities" (OuterVolumeSpecName: "utilities") pod "8b304dea-fc9b-4fc9-ae6a-6b8377e36578" (UID: "8b304dea-fc9b-4fc9-ae6a-6b8377e36578"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.057719 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-utilities" (OuterVolumeSpecName: "utilities") pod "aec579bf-d684-4ab1-a91c-366365920404" (UID: "aec579bf-d684-4ab1-a91c-366365920404"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.061666 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec579bf-d684-4ab1-a91c-366365920404-kube-api-access-pch4l" (OuterVolumeSpecName: "kube-api-access-pch4l") pod "aec579bf-d684-4ab1-a91c-366365920404" (UID: "aec579bf-d684-4ab1-a91c-366365920404"). InnerVolumeSpecName "kube-api-access-pch4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.062243 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-kube-api-access-9wn22" (OuterVolumeSpecName: "kube-api-access-9wn22") pod "8b304dea-fc9b-4fc9-ae6a-6b8377e36578" (UID: "8b304dea-fc9b-4fc9-ae6a-6b8377e36578"). InnerVolumeSpecName "kube-api-access-9wn22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.066973 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82e6868-6070-4c8b-9564-c7f0ae98c951-kube-api-access-ncxvp" (OuterVolumeSpecName: "kube-api-access-ncxvp") pod "b82e6868-6070-4c8b-9564-c7f0ae98c951" (UID: "b82e6868-6070-4c8b-9564-c7f0ae98c951"). InnerVolumeSpecName "kube-api-access-ncxvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.082287 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aec579bf-d684-4ab1-a91c-366365920404" (UID: "aec579bf-d684-4ab1-a91c-366365920404"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.096251 4953 generic.go:334] "Generic (PLEG): container finished" podID="69abc20a-54cb-47c6-884d-e12fd1984fdb" containerID="a9aeffa26217ec9e8a9cc24f0ff4e153cedea183bfaa19dfaf30fc8355ec0bc5" exitCode=0 Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.096352 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" event={"ID":"69abc20a-54cb-47c6-884d-e12fd1984fdb","Type":"ContainerDied","Data":"a9aeffa26217ec9e8a9cc24f0ff4e153cedea183bfaa19dfaf30fc8355ec0bc5"} Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.100749 4953 generic.go:334] "Generic (PLEG): container finished" podID="aec579bf-d684-4ab1-a91c-366365920404" containerID="26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1" exitCode=0 Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.100810 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7fhk" 
event={"ID":"aec579bf-d684-4ab1-a91c-366365920404","Type":"ContainerDied","Data":"26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1"} Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.100852 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7fhk" event={"ID":"aec579bf-d684-4ab1-a91c-366365920404","Type":"ContainerDied","Data":"d094bfe3bf464e56481491c77b33d8dfe923903ba87d1cd505c4add7bd43f81d"} Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.100953 4953 scope.go:117] "RemoveContainer" containerID="26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.101162 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7fhk" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.105556 4953 generic.go:334] "Generic (PLEG): container finished" podID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerID="cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d" exitCode=0 Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.105597 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v2s" event={"ID":"b82e6868-6070-4c8b-9564-c7f0ae98c951","Type":"ContainerDied","Data":"cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d"} Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.105650 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v2s" event={"ID":"b82e6868-6070-4c8b-9564-c7f0ae98c951","Type":"ContainerDied","Data":"106af232ddca2a8e1acf0e6f25ad6cb3426470d4fc23b6ba1d098a7b894eeeb6"} Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.105616 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44v2s" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.124185 4953 generic.go:334] "Generic (PLEG): container finished" podID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerID="1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586" exitCode=0 Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.124330 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp2fr" event={"ID":"8b304dea-fc9b-4fc9-ae6a-6b8377e36578","Type":"ContainerDied","Data":"1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586"} Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.124387 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp2fr" event={"ID":"8b304dea-fc9b-4fc9-ae6a-6b8377e36578","Type":"ContainerDied","Data":"77fed5202a0fbab6f1ab4d439c5283fc7dbd6446c61163ba95e5329a70c77445"} Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.124699 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pp2fr" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.137424 4953 generic.go:334] "Generic (PLEG): container finished" podID="0033d07e-7400-4307-89d8-efc2e34acee5" containerID="6f0f7bd73e65d8ce905aeae1edb3eac29e7e3679dd1c5a87ff23298839e1a78c" exitCode=0 Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.137465 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pkm" event={"ID":"0033d07e-7400-4307-89d8-efc2e34acee5","Type":"ContainerDied","Data":"6f0f7bd73e65d8ce905aeae1edb3eac29e7e3679dd1c5a87ff23298839e1a78c"} Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.140159 4953 scope.go:117] "RemoveContainer" containerID="eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.151891 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7fhk"] Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.154034 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7fhk"] Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.154559 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b82e6868-6070-4c8b-9564-c7f0ae98c951" (UID: "b82e6868-6070-4c8b-9564-c7f0ae98c951"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.156747 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pch4l\" (UniqueName: \"kubernetes.io/projected/aec579bf-d684-4ab1-a91c-366365920404-kube-api-access-pch4l\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.156769 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncxvp\" (UniqueName: \"kubernetes.io/projected/b82e6868-6070-4c8b-9564-c7f0ae98c951-kube-api-access-ncxvp\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.156782 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.156793 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.156805 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b82e6868-6070-4c8b-9564-c7f0ae98c951-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.156816 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.156828 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wn22\" (UniqueName: \"kubernetes.io/projected/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-kube-api-access-9wn22\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: 
I0223 00:10:35.156839 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec579bf-d684-4ab1-a91c-366365920404-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.169728 4953 scope.go:117] "RemoveContainer" containerID="9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.197677 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.203040 4953 scope.go:117] "RemoveContainer" containerID="26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1" Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.213623 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1\": container with ID starting with 26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1 not found: ID does not exist" containerID="26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.213799 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1"} err="failed to get container status \"26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1\": rpc error: code = NotFound desc = could not find container \"26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1\": container with ID starting with 26fa4c19f6ed79d4aa35a7d22f1ba3e5c67f69051bd72d04d3b7abb3a88005b1 not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.213827 4953 scope.go:117] "RemoveContainer" 
containerID="eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80" Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.214527 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80\": container with ID starting with eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80 not found: ID does not exist" containerID="eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.214544 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80"} err="failed to get container status \"eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80\": rpc error: code = NotFound desc = could not find container \"eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80\": container with ID starting with eec5e9e7ecbcecfb9dd8c2781eac5629f3976ba990e234f1ad5883130e3fdb80 not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.214558 4953 scope.go:117] "RemoveContainer" containerID="9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55" Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.215254 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55\": container with ID starting with 9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55 not found: ID does not exist" containerID="9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.215270 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55"} err="failed to get container status \"9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55\": rpc error: code = NotFound desc = could not find container \"9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55\": container with ID starting with 9671adb1833e91aa36b356a244171b894e69c03113c673c44d01fb7f5307ff55 not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.215281 4953 scope.go:117] "RemoveContainer" containerID="cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.217650 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xn8m6"] Feb 23 00:10:35 crc kubenswrapper[4953]: W0223 00:10:35.230496 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6fcf4dd_f162_4b92_82b2_98bf669fd3f2.slice/crio-0bc2f7576c3c011c5219c0e9693710ee8f6b65fa6bb9a46dea06e42d43648f88 WatchSource:0}: Error finding container 0bc2f7576c3c011c5219c0e9693710ee8f6b65fa6bb9a46dea06e42d43648f88: Status 404 returned error can't find the container with id 0bc2f7576c3c011c5219c0e9693710ee8f6b65fa6bb9a46dea06e42d43648f88 Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.232543 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.252124 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b304dea-fc9b-4fc9-ae6a-6b8377e36578" (UID: "8b304dea-fc9b-4fc9-ae6a-6b8377e36578"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.257437 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b304dea-fc9b-4fc9-ae6a-6b8377e36578-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.263307 4953 scope.go:117] "RemoveContainer" containerID="7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.286733 4953 scope.go:117] "RemoveContainer" containerID="e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.308402 4953 scope.go:117] "RemoveContainer" containerID="cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d" Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.308924 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d\": container with ID starting with cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d not found: ID does not exist" containerID="cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.308962 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d"} err="failed to get container status \"cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d\": rpc error: code = NotFound desc = could not find container \"cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d\": container with ID starting with cd4e8ab3408bd485cc7fb2f2e88cf6a43f277ee7347884348235f735c705bd1d not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.308983 4953 
scope.go:117] "RemoveContainer" containerID="7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e" Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.309369 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e\": container with ID starting with 7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e not found: ID does not exist" containerID="7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.309413 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e"} err="failed to get container status \"7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e\": rpc error: code = NotFound desc = could not find container \"7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e\": container with ID starting with 7515888e136dbefcd4619e1c17570323939333f5128a09a2bb9fc09c5259221e not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.309440 4953 scope.go:117] "RemoveContainer" containerID="e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5" Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.309714 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5\": container with ID starting with e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5 not found: ID does not exist" containerID="e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.309740 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5"} err="failed to get container status \"e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5\": rpc error: code = NotFound desc = could not find container \"e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5\": container with ID starting with e495f190f847f390d71b438836db8776b870658649c4f332c7211f799d1771d5 not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.309756 4953 scope.go:117] "RemoveContainer" containerID="1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.322411 4953 scope.go:117] "RemoveContainer" containerID="4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.334022 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec579bf-d684-4ab1-a91c-366365920404" path="/var/lib/kubelet/pods/aec579bf-d684-4ab1-a91c-366365920404/volumes" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.341426 4953 scope.go:117] "RemoveContainer" containerID="6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.356801 4953 scope.go:117] "RemoveContainer" containerID="1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586" Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.357292 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586\": container with ID starting with 1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586 not found: ID does not exist" containerID="1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.357342 4953 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586"} err="failed to get container status \"1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586\": rpc error: code = NotFound desc = could not find container \"1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586\": container with ID starting with 1c8263f8b8fbd48e2dd098a38dfa9c6b1a73ba7ff8d510d98b24d04db7b52586 not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.357370 4953 scope.go:117] "RemoveContainer" containerID="4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b" Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.357689 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b\": container with ID starting with 4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b not found: ID does not exist" containerID="4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.357725 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b"} err="failed to get container status \"4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b\": rpc error: code = NotFound desc = could not find container \"4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b\": container with ID starting with 4ab8c04e6d64cae31dacd98f524e91e9c7eadc94d68a42981f8402f96685304b not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.357751 4953 scope.go:117] "RemoveContainer" containerID="6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.357769 4953 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-trusted-ca\") pod \"69abc20a-54cb-47c6-884d-e12fd1984fdb\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.357790 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-operator-metrics\") pod \"69abc20a-54cb-47c6-884d-e12fd1984fdb\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.357824 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsv4b\" (UniqueName: \"kubernetes.io/projected/69abc20a-54cb-47c6-884d-e12fd1984fdb-kube-api-access-jsv4b\") pod \"69abc20a-54cb-47c6-884d-e12fd1984fdb\" (UID: \"69abc20a-54cb-47c6-884d-e12fd1984fdb\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.357843 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk9sp\" (UniqueName: \"kubernetes.io/projected/0033d07e-7400-4307-89d8-efc2e34acee5-kube-api-access-hk9sp\") pod \"0033d07e-7400-4307-89d8-efc2e34acee5\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " Feb 23 00:10:35 crc kubenswrapper[4953]: E0223 00:10:35.358019 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055\": container with ID starting with 6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055 not found: ID does not exist" containerID="6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.358040 4953 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055"} err="failed to get container status \"6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055\": rpc error: code = NotFound desc = could not find container \"6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055\": container with ID starting with 6cc286cbb00ea26049cd1e06b10e35522076717cc4692054a67919df776bb055 not found: ID does not exist" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.358428 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-utilities\") pod \"0033d07e-7400-4307-89d8-efc2e34acee5\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.358488 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-catalog-content\") pod \"0033d07e-7400-4307-89d8-efc2e34acee5\" (UID: \"0033d07e-7400-4307-89d8-efc2e34acee5\") " Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.358843 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "69abc20a-54cb-47c6-884d-e12fd1984fdb" (UID: "69abc20a-54cb-47c6-884d-e12fd1984fdb"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.358929 4953 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.359068 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-utilities" (OuterVolumeSpecName: "utilities") pod "0033d07e-7400-4307-89d8-efc2e34acee5" (UID: "0033d07e-7400-4307-89d8-efc2e34acee5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.364606 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0033d07e-7400-4307-89d8-efc2e34acee5-kube-api-access-hk9sp" (OuterVolumeSpecName: "kube-api-access-hk9sp") pod "0033d07e-7400-4307-89d8-efc2e34acee5" (UID: "0033d07e-7400-4307-89d8-efc2e34acee5"). InnerVolumeSpecName "kube-api-access-hk9sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.365231 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69abc20a-54cb-47c6-884d-e12fd1984fdb-kube-api-access-jsv4b" (OuterVolumeSpecName: "kube-api-access-jsv4b") pod "69abc20a-54cb-47c6-884d-e12fd1984fdb" (UID: "69abc20a-54cb-47c6-884d-e12fd1984fdb"). InnerVolumeSpecName "kube-api-access-jsv4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.371637 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "69abc20a-54cb-47c6-884d-e12fd1984fdb" (UID: "69abc20a-54cb-47c6-884d-e12fd1984fdb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.413694 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0033d07e-7400-4307-89d8-efc2e34acee5" (UID: "0033d07e-7400-4307-89d8-efc2e34acee5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.422315 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-44v2s"] Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.426182 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-44v2s"] Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.441653 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pp2fr"] Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.446711 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pp2fr"] Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.460274 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.460308 4953 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0033d07e-7400-4307-89d8-efc2e34acee5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.460320 4953 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69abc20a-54cb-47c6-884d-e12fd1984fdb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.460334 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsv4b\" (UniqueName: \"kubernetes.io/projected/69abc20a-54cb-47c6-884d-e12fd1984fdb-kube-api-access-jsv4b\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:35 crc kubenswrapper[4953]: I0223 00:10:35.460343 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk9sp\" (UniqueName: \"kubernetes.io/projected/0033d07e-7400-4307-89d8-efc2e34acee5-kube-api-access-hk9sp\") on node \"crc\" DevicePath \"\"" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.145167 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" event={"ID":"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2","Type":"ContainerStarted","Data":"38b1468c3df9d5ffa9cf0269b507e76cf6235610bdd7eaaf87ede7887245f1e7"} Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.146032 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.146128 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" event={"ID":"e6fcf4dd-f162-4b92-82b2-98bf669fd3f2","Type":"ContainerStarted","Data":"0bc2f7576c3c011c5219c0e9693710ee8f6b65fa6bb9a46dea06e42d43648f88"} Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.150032 4953 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65pkm" event={"ID":"0033d07e-7400-4307-89d8-efc2e34acee5","Type":"ContainerDied","Data":"0c3cc4a26772079614dce3804e59346da95853edb5cf9af21202be37625bdbe4"} Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.150080 4953 scope.go:117] "RemoveContainer" containerID="6f0f7bd73e65d8ce905aeae1edb3eac29e7e3679dd1c5a87ff23298839e1a78c" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.150105 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65pkm" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.153053 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" event={"ID":"69abc20a-54cb-47c6-884d-e12fd1984fdb","Type":"ContainerDied","Data":"a3c2d3de6b5af9c7af76944d6b004c0d4ce3610f1ad0f6740681b25e530674f2"} Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.153224 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pjdts" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.154542 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.172598 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xn8m6" podStartSLOduration=2.172567361 podStartE2EDuration="2.172567361s" podCreationTimestamp="2026-02-23 00:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:10:36.165483122 +0000 UTC m=+234.099324968" watchObservedRunningTime="2026-02-23 00:10:36.172567361 +0000 UTC m=+234.106409237" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.181820 4953 scope.go:117] "RemoveContainer" containerID="81965963f5e67588e9731bd2acdc03249d719e6a9c021d20acd6f4f6185ea651" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.216396 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pjdts"] Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.219602 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pjdts"] Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.221610 4953 scope.go:117] "RemoveContainer" containerID="e4834df52118b1f23e3aaaaecb593ecf6827bff48f27a3bfea4874a6345268bf" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.238178 4953 scope.go:117] "RemoveContainer" containerID="a9aeffa26217ec9e8a9cc24f0ff4e153cedea183bfaa19dfaf30fc8355ec0bc5" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.238436 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65pkm"] Feb 23 00:10:36 crc kubenswrapper[4953]: 
I0223 00:10:36.242696 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65pkm"] Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559134 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zmzrz"] Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559344 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69abc20a-54cb-47c6-884d-e12fd1984fdb" containerName="marketplace-operator" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559358 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="69abc20a-54cb-47c6-884d-e12fd1984fdb" containerName="marketplace-operator" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559366 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec579bf-d684-4ab1-a91c-366365920404" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559371 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec579bf-d684-4ab1-a91c-366365920404" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559382 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="extract-utilities" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559388 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="extract-utilities" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559396 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559402 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559410 4953 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559416 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559426 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec579bf-d684-4ab1-a91c-366365920404" containerName="extract-content" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559432 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec579bf-d684-4ab1-a91c-366365920404" containerName="extract-content" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559443 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerName="extract-utilities" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559448 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerName="extract-utilities" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559456 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559463 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559471 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec579bf-d684-4ab1-a91c-366365920404" containerName="extract-utilities" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559477 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec579bf-d684-4ab1-a91c-366365920404" containerName="extract-utilities" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559487 4953 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" containerName="extract-utilities" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559492 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" containerName="extract-utilities" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559500 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" containerName="extract-content" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559505 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" containerName="extract-content" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559511 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerName="extract-content" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559517 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerName="extract-content" Feb 23 00:10:36 crc kubenswrapper[4953]: E0223 00:10:36.559525 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="extract-content" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559532 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="extract-content" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559626 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559637 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="69abc20a-54cb-47c6-884d-e12fd1984fdb" containerName="marketplace-operator" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559644 4953 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559656 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec579bf-d684-4ab1-a91c-366365920404" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.559666 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" containerName="registry-server" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.560286 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.563697 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.572926 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmzrz"] Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.675051 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9jqw\" (UniqueName: \"kubernetes.io/projected/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-kube-api-access-k9jqw\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.675100 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-catalog-content\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.675131 
4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-utilities\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.761418 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bvj5x"] Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.762628 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.764825 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.772105 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvj5x"] Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.804418 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-catalog-content\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.804528 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-utilities\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.804610 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-utilities\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.804723 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-catalog-content\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.804794 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ffg\" (UniqueName: \"kubernetes.io/projected/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-kube-api-access-s5ffg\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.804902 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9jqw\" (UniqueName: \"kubernetes.io/projected/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-kube-api-access-k9jqw\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.805656 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-catalog-content\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.808788 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-utilities\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.831588 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9jqw\" (UniqueName: \"kubernetes.io/projected/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-kube-api-access-k9jqw\") pod \"redhat-marketplace-zmzrz\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.881321 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.907781 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-utilities\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.907990 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-catalog-content\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.908277 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5ffg\" (UniqueName: \"kubernetes.io/projected/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-kube-api-access-s5ffg\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " 
pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.908707 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-catalog-content\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.908915 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-utilities\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:36 crc kubenswrapper[4953]: I0223 00:10:36.923982 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5ffg\" (UniqueName: \"kubernetes.io/projected/7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7-kube-api-access-s5ffg\") pod \"redhat-operators-bvj5x\" (UID: \"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7\") " pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:37 crc kubenswrapper[4953]: I0223 00:10:37.066808 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmzrz"] Feb 23 00:10:37 crc kubenswrapper[4953]: I0223 00:10:37.111171 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:37 crc kubenswrapper[4953]: I0223 00:10:37.162587 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmzrz" event={"ID":"3ff8b616-423d-4b7f-9fb8-2ac91fbef324","Type":"ContainerStarted","Data":"4334880ccb543b3c5d4d44e45f457acd59d715cd244f01c7651b8068f79e8086"} Feb 23 00:10:37 crc kubenswrapper[4953]: I0223 00:10:37.285396 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvj5x"] Feb 23 00:10:37 crc kubenswrapper[4953]: W0223 00:10:37.294027 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be3ebf1_fa87_4f28_95d5_a4a0e09b3dc7.slice/crio-5ed6fee6aa389badda8bd200d4129b13db0c57b7a6ac8687785828a5c6dd70e6 WatchSource:0}: Error finding container 5ed6fee6aa389badda8bd200d4129b13db0c57b7a6ac8687785828a5c6dd70e6: Status 404 returned error can't find the container with id 5ed6fee6aa389badda8bd200d4129b13db0c57b7a6ac8687785828a5c6dd70e6 Feb 23 00:10:37 crc kubenswrapper[4953]: I0223 00:10:37.331022 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0033d07e-7400-4307-89d8-efc2e34acee5" path="/var/lib/kubelet/pods/0033d07e-7400-4307-89d8-efc2e34acee5/volumes" Feb 23 00:10:37 crc kubenswrapper[4953]: I0223 00:10:37.331829 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69abc20a-54cb-47c6-884d-e12fd1984fdb" path="/var/lib/kubelet/pods/69abc20a-54cb-47c6-884d-e12fd1984fdb/volumes" Feb 23 00:10:37 crc kubenswrapper[4953]: I0223 00:10:37.332275 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b304dea-fc9b-4fc9-ae6a-6b8377e36578" path="/var/lib/kubelet/pods/8b304dea-fc9b-4fc9-ae6a-6b8377e36578/volumes" Feb 23 00:10:37 crc kubenswrapper[4953]: I0223 00:10:37.333229 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b82e6868-6070-4c8b-9564-c7f0ae98c951" path="/var/lib/kubelet/pods/b82e6868-6070-4c8b-9564-c7f0ae98c951/volumes" Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.169134 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" containerID="1a4260648411d255ee3eaad70abf3f308dbe1d235fd3056fd65274c0ecdffb3a" exitCode=0 Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.169247 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvj5x" event={"ID":"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7","Type":"ContainerDied","Data":"1a4260648411d255ee3eaad70abf3f308dbe1d235fd3056fd65274c0ecdffb3a"} Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.169457 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvj5x" event={"ID":"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7","Type":"ContainerStarted","Data":"5ed6fee6aa389badda8bd200d4129b13db0c57b7a6ac8687785828a5c6dd70e6"} Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.171679 4953 generic.go:334] "Generic (PLEG): container finished" podID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerID="2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054" exitCode=0 Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.172072 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmzrz" event={"ID":"3ff8b616-423d-4b7f-9fb8-2ac91fbef324","Type":"ContainerDied","Data":"2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054"} Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.961274 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2g5l7"] Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.962225 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.964465 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 00:10:38 crc kubenswrapper[4953]: I0223 00:10:38.972601 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2g5l7"] Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.034850 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-catalog-content\") pod \"community-operators-2g5l7\" (UID: \"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.034920 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdq5\" (UniqueName: \"kubernetes.io/projected/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-kube-api-access-hvdq5\") pod \"community-operators-2g5l7\" (UID: \"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.034955 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-utilities\") pod \"community-operators-2g5l7\" (UID: \"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.135877 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-utilities\") pod \"community-operators-2g5l7\" (UID: 
\"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.136244 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-catalog-content\") pod \"community-operators-2g5l7\" (UID: \"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.136287 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvdq5\" (UniqueName: \"kubernetes.io/projected/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-kube-api-access-hvdq5\") pod \"community-operators-2g5l7\" (UID: \"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.136587 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-utilities\") pod \"community-operators-2g5l7\" (UID: \"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.136690 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-catalog-content\") pod \"community-operators-2g5l7\" (UID: \"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.162337 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvdq5\" (UniqueName: \"kubernetes.io/projected/d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc-kube-api-access-hvdq5\") pod \"community-operators-2g5l7\" (UID: 
\"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc\") " pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.163630 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tccw9"] Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.164693 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.168763 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.176160 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tccw9"] Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.179551 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvj5x" event={"ID":"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7","Type":"ContainerStarted","Data":"86ffbbe504516e485168302a4a95d6e2552787de55de28c8dc9c757ffaa63f60"} Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.180972 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmzrz" event={"ID":"3ff8b616-423d-4b7f-9fb8-2ac91fbef324","Type":"ContainerStarted","Data":"799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5"} Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.237131 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f8e5178-695c-48c9-a34b-98b5a9659111-utilities\") pod \"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.237218 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f8e5178-695c-48c9-a34b-98b5a9659111-catalog-content\") pod \"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.237250 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngn8\" (UniqueName: \"kubernetes.io/projected/4f8e5178-695c-48c9-a34b-98b5a9659111-kube-api-access-dngn8\") pod \"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.284932 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.338689 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngn8\" (UniqueName: \"kubernetes.io/projected/4f8e5178-695c-48c9-a34b-98b5a9659111-kube-api-access-dngn8\") pod \"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.338776 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f8e5178-695c-48c9-a34b-98b5a9659111-utilities\") pod \"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.338814 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f8e5178-695c-48c9-a34b-98b5a9659111-catalog-content\") pod 
\"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.339589 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f8e5178-695c-48c9-a34b-98b5a9659111-catalog-content\") pod \"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.339866 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f8e5178-695c-48c9-a34b-98b5a9659111-utilities\") pod \"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.362051 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngn8\" (UniqueName: \"kubernetes.io/projected/4f8e5178-695c-48c9-a34b-98b5a9659111-kube-api-access-dngn8\") pod \"certified-operators-tccw9\" (UID: \"4f8e5178-695c-48c9-a34b-98b5a9659111\") " pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.450700 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2g5l7"] Feb 23 00:10:39 crc kubenswrapper[4953]: W0223 00:10:39.458784 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2e714e5_50c5_4a8f_a500_68eb0bc4f5fc.slice/crio-5d7195053afdb3b385394f521ef8678cf8cafd2a301abcfadb1e20e5ef22fd46 WatchSource:0}: Error finding container 5d7195053afdb3b385394f521ef8678cf8cafd2a301abcfadb1e20e5ef22fd46: Status 404 returned error can't find the container with id 
5d7195053afdb3b385394f521ef8678cf8cafd2a301abcfadb1e20e5ef22fd46 Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.509534 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:39 crc kubenswrapper[4953]: I0223 00:10:39.681280 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tccw9"] Feb 23 00:10:39 crc kubenswrapper[4953]: W0223 00:10:39.690369 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f8e5178_695c_48c9_a34b_98b5a9659111.slice/crio-e8e4eab516238f4389d7852cfbd0086852c2e3e2e05a8d71003301f6dc413dbb WatchSource:0}: Error finding container e8e4eab516238f4389d7852cfbd0086852c2e3e2e05a8d71003301f6dc413dbb: Status 404 returned error can't find the container with id e8e4eab516238f4389d7852cfbd0086852c2e3e2e05a8d71003301f6dc413dbb Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.188128 4953 generic.go:334] "Generic (PLEG): container finished" podID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" containerID="86ffbbe504516e485168302a4a95d6e2552787de55de28c8dc9c757ffaa63f60" exitCode=0 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.188246 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvj5x" event={"ID":"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7","Type":"ContainerDied","Data":"86ffbbe504516e485168302a4a95d6e2552787de55de28c8dc9c757ffaa63f60"} Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.192563 4953 generic.go:334] "Generic (PLEG): container finished" podID="4f8e5178-695c-48c9-a34b-98b5a9659111" containerID="e6a8c56a475f95f61aa3cc579a69bc492758fd05aa9cac214a349b3b28ba43fa" exitCode=0 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.192610 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tccw9" 
event={"ID":"4f8e5178-695c-48c9-a34b-98b5a9659111","Type":"ContainerDied","Data":"e6a8c56a475f95f61aa3cc579a69bc492758fd05aa9cac214a349b3b28ba43fa"} Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.192650 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tccw9" event={"ID":"4f8e5178-695c-48c9-a34b-98b5a9659111","Type":"ContainerStarted","Data":"e8e4eab516238f4389d7852cfbd0086852c2e3e2e05a8d71003301f6dc413dbb"} Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.200128 4953 generic.go:334] "Generic (PLEG): container finished" podID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" containerID="687b68b80b1bc74f287a37603c97af9a820d0bd4e9850bde8369d115c39e41b7" exitCode=0 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.200207 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g5l7" event={"ID":"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc","Type":"ContainerDied","Data":"687b68b80b1bc74f287a37603c97af9a820d0bd4e9850bde8369d115c39e41b7"} Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.200268 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g5l7" event={"ID":"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc","Type":"ContainerStarted","Data":"5d7195053afdb3b385394f521ef8678cf8cafd2a301abcfadb1e20e5ef22fd46"} Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.203900 4953 generic.go:334] "Generic (PLEG): container finished" podID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerID="799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5" exitCode=0 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.203949 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmzrz" event={"ID":"3ff8b616-423d-4b7f-9fb8-2ac91fbef324","Type":"ContainerDied","Data":"799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5"} Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 
00:10:40.794144 4953 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.795087 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.795341 4953 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.795789 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0" gracePeriod=15 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.795767 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861" gracePeriod=15 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.795785 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3" gracePeriod=15 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.795882 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1" gracePeriod=15 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.795940 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112" gracePeriod=15 Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796268 4953 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.796394 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796408 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.796417 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796424 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.796433 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796440 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.796448 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796454 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.796462 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796467 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.796474 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796480 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.796488 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796493 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796711 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796724 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 
00:10:40.796734 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796743 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796750 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796756 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.796849 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796856 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.796928 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.859321 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.859372 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.859390 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.859415 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.859443 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.859468 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.859487 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.859502 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.866935 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-bvj5x.1896b7a939d86837 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-bvj5x,UID:7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7,APIVersion:v1,ResourceVersion:29706,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 674ms (674ms including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 00:10:40.864708663 +0000 UTC m=+238.798550509,LastTimestamp:2026-02-23 00:10:40.864708663 +0000 UTC m=+238.798550509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 00:10:40 crc kubenswrapper[4953]: E0223 00:10:40.912771 4953 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.960246 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.960548 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.960640 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.960710 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.960855 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.960942 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961012 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961084 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961189 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.960506 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961359 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961436 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961521 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961601 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961683 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 00:10:40 crc kubenswrapper[4953]: I0223 00:10:40.961758 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.209798 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmzrz" event={"ID":"3ff8b616-423d-4b7f-9fb8-2ac91fbef324","Type":"ContainerStarted","Data":"de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489"}
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.211033 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.211258 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.212310 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.213458 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.213665 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.214446 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3" exitCode=0
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.214467 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1" exitCode=0
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.214476 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0" exitCode=0
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.214495 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112" exitCode=2
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.214545 4953 scope.go:117] "RemoveContainer" containerID="29a7cf4f2803a8b48bdfd36a032ab135cbd552c3904193af5308d45ca67d6e1b"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.215763 4953 generic.go:334] "Generic (PLEG): container finished" podID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" containerID="079b883eae44b359c1298d83dc807dd02f410810878efb788ffb519a836d0213" exitCode=0
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.215788 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263","Type":"ContainerDied","Data":"079b883eae44b359c1298d83dc807dd02f410810878efb788ffb519a836d0213"}
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.216460 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.216717 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.216920 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.217670 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tccw9" event={"ID":"4f8e5178-695c-48c9-a34b-98b5a9659111","Type":"ContainerStarted","Data":"e44ddfcf4ae21420efb484f61b02ceb285249dac91e378c6876a339df6833f25"}
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.218167 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.218347 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.218534 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.218839 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.220375 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvj5x" event={"ID":"7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7","Type":"ContainerStarted","Data":"48c30c20d8370f408dcb821a252232f598eeacba8f22e2f5a763e46da4206461"}
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.221049 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.221195 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.221359 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.221549 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: I0223 00:10:41.225527 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:41 crc kubenswrapper[4953]: W0223 00:10:41.235017 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-58a843c98efddfad7d313d723fe3f96e3cd347eab2db709001572f182a426c9c WatchSource:0}: Error finding container 58a843c98efddfad7d313d723fe3f96e3cd347eab2db709001572f182a426c9c: Status 404 returned error can't find the container with id 58a843c98efddfad7d313d723fe3f96e3cd347eab2db709001572f182a426c9c
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.231443 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.235576 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b"}
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.235614 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"58a843c98efddfad7d313d723fe3f96e3cd347eab2db709001572f182a426c9c"}
Feb 23 00:10:42 crc kubenswrapper[4953]: E0223 00:10:42.236350 4953 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.236546 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.236852 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.237033 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.237318 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.238207 4953 generic.go:334] "Generic (PLEG): container finished" podID="4f8e5178-695c-48c9-a34b-98b5a9659111" containerID="e44ddfcf4ae21420efb484f61b02ceb285249dac91e378c6876a339df6833f25" exitCode=0
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.238235 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tccw9" event={"ID":"4f8e5178-695c-48c9-a34b-98b5a9659111","Type":"ContainerDied","Data":"e44ddfcf4ae21420efb484f61b02ceb285249dac91e378c6876a339df6833f25"}
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.238650 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.238843 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.239171 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.239418 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.240054 4953 generic.go:334] "Generic (PLEG): container finished" podID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" containerID="98a8888ec552c75bff00b6964e87fd2cf9823e05686053889880cfdb5cb2cb35" exitCode=0
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.240077 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g5l7" event={"ID":"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc","Type":"ContainerDied","Data":"98a8888ec552c75bff00b6964e87fd2cf9823e05686053889880cfdb5cb2cb35"}
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.240591 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.240786 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.241021 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.241280 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.241547 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.480317 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.481499 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.481885 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.482068 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.482221 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.482392 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.581996 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-var-lock\") pod \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") "
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.582045 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kube-api-access\") pod \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") "
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.582127 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kubelet-dir\") pod \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\" (UID: \"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263\") "
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.582130 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-var-lock" (OuterVolumeSpecName: "var-lock") pod "c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" (UID: "c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.582278 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" (UID: "c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.582392 4953 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-var-lock\") on node \"crc\" DevicePath \"\""
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.582405 4953 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.588526 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" (UID: "c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:10:42 crc kubenswrapper[4953]: I0223 00:10:42.684003 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.250902 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g5l7" event={"ID":"d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc","Type":"ContainerStarted","Data":"86baafddc2cf0c75040ee8cf23e4152d196b7193c184a8464cb95a022804f516"}
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.252083 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.252580 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.252967 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.253333 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.253775 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.255760 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.256469 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861" exitCode=0
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.258032 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.258466 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263","Type":"ContainerDied","Data":"f5b3d4e18a7de0e3b6f93939cb1e2bc1da50f07b977554d199febf8ac384bc1d"}
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.258615 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b3d4e18a7de0e3b6f93939cb1e2bc1da50f07b977554d199febf8ac384bc1d"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.261210 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tccw9" event={"ID":"4f8e5178-695c-48c9-a34b-98b5a9659111","Type":"ContainerStarted","Data":"db4f0aa8d51c8f1d50ea8ed361ec70917b36a99ec9bac61d0cfa545ca00d8d8b"}
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.266870 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.267404 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.268084 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.268500 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.269001 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.274853 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.275522 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.275855 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.276099 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.276338 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.331034 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.332038 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.332524 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.333448 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.333657 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.679767 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.680604 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.681272 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.681762 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.682093 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.682444 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.682726 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.683057 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699019 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699126 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699150 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699148 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699208 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699325 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699572 4953 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699632 4953 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 23 00:10:43 crc kubenswrapper[4953]: I0223 00:10:43.699642 4953 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.271217 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.272791 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.274830 4953 scope.go:117] "RemoveContainer" containerID="4f5fb98daa15be22b4d9bed9195002dbe1e2f92471d31a217139b5fe30b6afe3"
Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.287906 4953 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.288089 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.288276 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.288506 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused"
Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.288703 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324"
pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.288916 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.301210 4953 scope.go:117] "RemoveContainer" containerID="d200d08707cdb7a874822db964a063ed93d43d4c75b7f36da04052d42cb326f1" Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.321565 4953 scope.go:117] "RemoveContainer" containerID="530aaa156c0c9a06b107352d5e1cf1923948d43aa7a2b5d7ebce60b9ea64aab0" Feb 23 00:10:44 crc kubenswrapper[4953]: E0223 00:10:44.328758 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-bvj5x.1896b7a939d86837 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-bvj5x,UID:7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7,APIVersion:v1,ResourceVersion:29706,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 674ms (674ms including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 00:10:40.864708663 +0000 UTC m=+238.798550509,LastTimestamp:2026-02-23 00:10:40.864708663 +0000 UTC m=+238.798550509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.348263 4953 scope.go:117] "RemoveContainer" containerID="b0da1bbdadee3f27fc4b75dcc0ba2e96c186cd3c553de342d8eeeb1ed5be6112" Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.366178 4953 scope.go:117] "RemoveContainer" containerID="7943bcbae6cfd9dbd36b547aec6c2bca62fcf895044e5df7ee8843522d95c861" Feb 23 00:10:44 crc kubenswrapper[4953]: I0223 00:10:44.389812 4953 scope.go:117] "RemoveContainer" containerID="2fd6c24c04774175c50520b9cb6f5f0e906c64af52ca832f0a5f648dce7b49e5" Feb 23 00:10:45 crc kubenswrapper[4953]: E0223 00:10:45.023636 4953 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:45 crc kubenswrapper[4953]: E0223 00:10:45.024396 4953 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:45 crc kubenswrapper[4953]: E0223 00:10:45.024629 4953 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:45 crc kubenswrapper[4953]: E0223 00:10:45.024829 4953 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:45 crc kubenswrapper[4953]: E0223 00:10:45.025035 4953 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:45 crc kubenswrapper[4953]: I0223 00:10:45.025070 4953 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 23 00:10:45 crc kubenswrapper[4953]: E0223 00:10:45.025275 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Feb 23 00:10:45 crc kubenswrapper[4953]: E0223 00:10:45.225970 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Feb 23 00:10:45 crc kubenswrapper[4953]: I0223 00:10:45.332959 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 23 00:10:45 crc kubenswrapper[4953]: E0223 00:10:45.627547 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Feb 23 00:10:46 crc kubenswrapper[4953]: E0223 00:10:46.428525 4953 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Feb 23 00:10:46 crc kubenswrapper[4953]: I0223 00:10:46.881596 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:46 crc kubenswrapper[4953]: I0223 00:10:46.881633 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:46 crc kubenswrapper[4953]: I0223 00:10:46.932781 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:46 crc kubenswrapper[4953]: I0223 00:10:46.933409 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:46 crc kubenswrapper[4953]: I0223 00:10:46.933819 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:46 crc kubenswrapper[4953]: I0223 00:10:46.934208 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:46 crc 
kubenswrapper[4953]: I0223 00:10:46.934533 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:46 crc kubenswrapper[4953]: I0223 00:10:46.934866 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.112109 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.112152 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.154546 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.155213 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.155692 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.156047 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.156461 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.156720 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.319972 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bvj5x" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.320722 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.321000 4953 
status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.321319 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.321714 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.322081 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.331911 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.332667 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.333036 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.333306 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.333622 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:47 crc kubenswrapper[4953]: I0223 00:10:47.333903 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:48 crc kubenswrapper[4953]: E0223 00:10:48.029962 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.285456 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.285878 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.347188 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.347869 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.348444 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.348980 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.349334 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" 
pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.349677 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.407606 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2g5l7" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.408307 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.408759 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.409032 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 
00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.409445 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.409738 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.509795 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.509844 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.570567 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.571017 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.571434 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.571792 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.572074 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:49 crc kubenswrapper[4953]: I0223 00:10:49.572406 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:50 crc kubenswrapper[4953]: I0223 00:10:50.351886 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tccw9" Feb 23 00:10:50 crc kubenswrapper[4953]: I0223 00:10:50.352831 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:50 crc kubenswrapper[4953]: I0223 00:10:50.354246 4953 
status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:50 crc kubenswrapper[4953]: I0223 00:10:50.355063 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:50 crc kubenswrapper[4953]: I0223 00:10:50.355442 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:50 crc kubenswrapper[4953]: I0223 00:10:50.355839 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:51 crc kubenswrapper[4953]: E0223 00:10:51.230939 4953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="6.4s" Feb 23 00:10:52 crc kubenswrapper[4953]: E0223 00:10:52.389278 4953 desired_state_of_world_populator.go:312] "Error processing volume" err="error 
processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" volumeName="registry-storage" Feb 23 00:10:53 crc kubenswrapper[4953]: I0223 00:10:53.328608 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:53 crc kubenswrapper[4953]: I0223 00:10:53.329374 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:53 crc kubenswrapper[4953]: I0223 00:10:53.329878 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:53 crc kubenswrapper[4953]: I0223 00:10:53.330212 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:53 crc 
kubenswrapper[4953]: I0223 00:10:53.330549 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:54 crc kubenswrapper[4953]: E0223 00:10:54.329590 4953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-bvj5x.1896b7a939d86837 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-bvj5x,UID:7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7,APIVersion:v1,ResourceVersion:29706,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 674ms (674ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 00:10:40.864708663 +0000 UTC m=+238.798550509,LastTimestamp:2026-02-23 00:10:40.864708663 +0000 UTC m=+238.798550509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.325630 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.327369 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.327848 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.328261 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.328701 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.328955 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 
38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.334282 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.334379 4953 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929" exitCode=1 Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.334417 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929"} Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.335149 4953 scope.go:117] "RemoveContainer" containerID="eac083e53d0cdcf89345a579d33a08fcecbfda45e50af24bb1f837666454c929" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.335203 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.335819 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.336401 4953 status_manager.go:851] "Failed to get status for pod" 
podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.337198 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.337662 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.338032 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.354401 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.354433 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:10:55 crc kubenswrapper[4953]: E0223 00:10:55.354846 4953 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.355225 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:55 crc kubenswrapper[4953]: W0223 00:10:55.376504 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ddf51fe1d4425df092877fa4fbf279dd1e04c0144ddb4e7b1d30c182b88da0e2 WatchSource:0}: Error finding container ddf51fe1d4425df092877fa4fbf279dd1e04c0144ddb4e7b1d30c182b88da0e2: Status 404 returned error can't find the container with id ddf51fe1d4425df092877fa4fbf279dd1e04c0144ddb4e7b1d30c182b88da0e2 Feb 23 00:10:55 crc kubenswrapper[4953]: E0223 00:10:55.698251 4953 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b215e4b11ac1c0fea4c14996b4f6d9c05c9afb6c5a1afa2db127a02b001875d2.scope\": RecentStats: unable to find data in memory cache]" Feb 23 00:10:55 crc kubenswrapper[4953]: I0223 00:10:55.867272 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.342890 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.343081 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"38d5690c7ebaa126ddda243c1ed2485df2181c94fd311270bf0d4997c4adebdf"} Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.343874 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.344394 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.344698 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.344886 4953 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b215e4b11ac1c0fea4c14996b4f6d9c05c9afb6c5a1afa2db127a02b001875d2" exitCode=0 Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.344919 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b215e4b11ac1c0fea4c14996b4f6d9c05c9afb6c5a1afa2db127a02b001875d2"} Feb 23 
00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.344938 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ddf51fe1d4425df092877fa4fbf279dd1e04c0144ddb4e7b1d30c182b88da0e2"} Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.345146 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.345196 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.345317 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.345526 4953 status_manager.go:851] "Failed to get status for pod" podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: E0223 00:10:56.345553 4953 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.345761 4953 status_manager.go:851] "Failed to get status for pod" 
podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.346027 4953 status_manager.go:851] "Failed to get status for pod" podUID="7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7" pod="openshift-marketplace/redhat-operators-bvj5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-bvj5x\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.346266 4953 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.346486 4953 status_manager.go:851] "Failed to get status for pod" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" pod="openshift-marketplace/redhat-marketplace-zmzrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zmzrz\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.346702 4953 status_manager.go:851] "Failed to get status for pod" podUID="d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc" pod="openshift-marketplace/community-operators-2g5l7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2g5l7\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.346978 4953 status_manager.go:851] "Failed to get status for pod" 
podUID="4f8e5178-695c-48c9-a34b-98b5a9659111" pod="openshift-marketplace/certified-operators-tccw9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-tccw9\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:56 crc kubenswrapper[4953]: I0223 00:10:56.347307 4953 status_manager.go:851] "Failed to get status for pod" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Feb 23 00:10:57 crc kubenswrapper[4953]: I0223 00:10:57.353729 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0f1719ba9b62649b450be162bfd962ec3c0b8cd026b30b4e20e99928709fdfc6"} Feb 23 00:10:57 crc kubenswrapper[4953]: I0223 00:10:57.354371 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c2270c4f991e4bce7f330e88a403e5e84303add9a99fb464d117324baf65321c"} Feb 23 00:10:57 crc kubenswrapper[4953]: I0223 00:10:57.354383 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"85eb8a54d5d11b5dbe9d746591d279ac538479321c94308f6a5f5ac4ffb6a10c"} Feb 23 00:10:57 crc kubenswrapper[4953]: I0223 00:10:57.789361 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:10:57 crc kubenswrapper[4953]: I0223 00:10:57.789662 4953 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 23 00:10:57 crc kubenswrapper[4953]: I0223 00:10:57.789706 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 23 00:10:58 crc kubenswrapper[4953]: I0223 00:10:58.362604 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7efd60a38a4be4074615a9c40e6312c7de607b8c7b486278e0f83516ceb2f6f7"} Feb 23 00:10:58 crc kubenswrapper[4953]: I0223 00:10:58.363000 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"04ad3d5fcecaea6928ab7054a8f92fec3f062a1d8c9df8be305b1ea789e1cd7b"} Feb 23 00:10:58 crc kubenswrapper[4953]: I0223 00:10:58.363155 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:10:58 crc kubenswrapper[4953]: I0223 00:10:58.363209 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:10:58 crc kubenswrapper[4953]: I0223 00:10:58.771361 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:11:00 crc kubenswrapper[4953]: I0223 00:11:00.504687 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:11:00 crc kubenswrapper[4953]: I0223 00:11:00.504739 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:11:00 crc kubenswrapper[4953]: I0223 00:11:00.516029 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:11:03 crc kubenswrapper[4953]: I0223 00:11:03.372517 4953 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:11:03 crc kubenswrapper[4953]: I0223 00:11:03.525777 4953 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9be9bd27-f021-40dd-8c5e-e6891dd8d686" Feb 23 00:11:03 crc kubenswrapper[4953]: I0223 00:11:03.539599 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:11:03 crc kubenswrapper[4953]: I0223 00:11:03.539642 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:11:03 crc kubenswrapper[4953]: I0223 00:11:03.539937 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:11:03 crc kubenswrapper[4953]: I0223 00:11:03.543435 4953 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9be9bd27-f021-40dd-8c5e-e6891dd8d686" Feb 23 00:11:03 crc kubenswrapper[4953]: I0223 00:11:03.545508 4953 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" 
containerID="cri-o://85eb8a54d5d11b5dbe9d746591d279ac538479321c94308f6a5f5ac4ffb6a10c" Feb 23 00:11:03 crc kubenswrapper[4953]: I0223 00:11:03.545556 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:11:04 crc kubenswrapper[4953]: I0223 00:11:04.546654 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:11:04 crc kubenswrapper[4953]: I0223 00:11:04.546714 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:11:04 crc kubenswrapper[4953]: I0223 00:11:04.550171 4953 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9be9bd27-f021-40dd-8c5e-e6891dd8d686" Feb 23 00:11:05 crc kubenswrapper[4953]: I0223 00:11:05.552453 4953 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:11:05 crc kubenswrapper[4953]: I0223 00:11:05.552864 4953 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fb70ef98-98d6-4d13-960f-eafb9095d015" Feb 23 00:11:05 crc kubenswrapper[4953]: I0223 00:11:05.556966 4953 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9be9bd27-f021-40dd-8c5e-e6891dd8d686" Feb 23 00:11:07 crc kubenswrapper[4953]: I0223 00:11:07.789465 4953 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 
192.168.126.11:10257: connect: connection refused" start-of-body= Feb 23 00:11:07 crc kubenswrapper[4953]: I0223 00:11:07.790000 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 23 00:11:13 crc kubenswrapper[4953]: I0223 00:11:13.150741 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 00:11:13 crc kubenswrapper[4953]: I0223 00:11:13.781718 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 00:11:13 crc kubenswrapper[4953]: I0223 00:11:13.921919 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 00:11:14 crc kubenswrapper[4953]: I0223 00:11:14.165283 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 00:11:14 crc kubenswrapper[4953]: I0223 00:11:14.507467 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 00:11:14 crc kubenswrapper[4953]: I0223 00:11:14.648966 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 00:11:14 crc kubenswrapper[4953]: I0223 00:11:14.678798 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 00:11:14 crc kubenswrapper[4953]: I0223 00:11:14.773741 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 00:11:14 crc kubenswrapper[4953]: I0223 
00:11:14.790277 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 00:11:14 crc kubenswrapper[4953]: I0223 00:11:14.945541 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 00:11:15 crc kubenswrapper[4953]: I0223 00:11:15.168207 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 00:11:15 crc kubenswrapper[4953]: I0223 00:11:15.355950 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 00:11:15 crc kubenswrapper[4953]: I0223 00:11:15.545262 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 00:11:15 crc kubenswrapper[4953]: I0223 00:11:15.597629 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 00:11:15 crc kubenswrapper[4953]: I0223 00:11:15.702933 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 00:11:15 crc kubenswrapper[4953]: I0223 00:11:15.860388 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 00:11:15 crc kubenswrapper[4953]: I0223 00:11:15.979957 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.029790 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.084887 4953 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.185420 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.241049 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.386819 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.402538 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.576894 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.637455 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.757046 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.768157 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.793345 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.803390 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 00:11:16 crc 
kubenswrapper[4953]: I0223 00:11:16.808699 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.908360 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 00:11:16 crc kubenswrapper[4953]: I0223 00:11:16.999681 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.108872 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.211063 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.231393 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.232197 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.237645 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.274127 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.291923 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.422394 4953 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.430613 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.451376 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.601813 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.703638 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.793839 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.794954 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.799814 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.879469 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.907448 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 00:11:17 crc kubenswrapper[4953]: I0223 00:11:17.938535 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 
00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.052916 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.084549 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.109954 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.123354 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.203477 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.261568 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.269993 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.280983 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.349001 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.407115 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.428578 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.429918 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.562002 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.563013 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.706466 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.765171 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.767172 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.787080 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.809829 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.810448 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.888894 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.896585 4953 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 00:11:18 crc kubenswrapper[4953]: I0223 00:11:18.929207 4953 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.069229 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.098558 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.240087 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.333278 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.367436 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.440185 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.451839 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.504411 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.508346 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" 
Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.577890 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.638010 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.709055 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.745626 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.819815 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.865530 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.873274 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 00:11:19 crc kubenswrapper[4953]: I0223 00:11:19.923112 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.036439 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.054081 4953 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.097110 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.115570 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.323884 4953 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.401348 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.419439 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.472184 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.508700 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.546634 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.551003 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.709590 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.733672 4953 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.779565 4953 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.849815 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.854103 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.885280 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.907436 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.931730 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.943189 4953 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.943584 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zmzrz" podStartSLOduration=42.371001937 podStartE2EDuration="44.94355906s" podCreationTimestamp="2026-02-23 00:10:36 +0000 UTC" firstStartedPulling="2026-02-23 00:10:38.173256959 +0000 UTC m=+236.107098805" lastFinishedPulling="2026-02-23 00:10:40.745814082 +0000 UTC m=+238.679655928" observedRunningTime="2026-02-23 00:11:03.414609404 +0000 UTC m=+261.348451250" watchObservedRunningTime="2026-02-23 00:11:20.94355906 +0000 UTC m=+278.877400906" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.944033 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-bvj5x" podStartSLOduration=42.251018457 podStartE2EDuration="44.944026253s" podCreationTimestamp="2026-02-23 00:10:36 +0000 UTC" firstStartedPulling="2026-02-23 00:10:38.171682257 +0000 UTC m=+236.105524113" lastFinishedPulling="2026-02-23 00:10:40.864690063 +0000 UTC m=+238.798531909" observedRunningTime="2026-02-23 00:11:03.491187946 +0000 UTC m=+261.425029792" watchObservedRunningTime="2026-02-23 00:11:20.944026253 +0000 UTC m=+278.877868099" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.945818 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2g5l7" podStartSLOduration=40.529637774 podStartE2EDuration="42.945812464s" podCreationTimestamp="2026-02-23 00:10:38 +0000 UTC" firstStartedPulling="2026-02-23 00:10:40.203765513 +0000 UTC m=+238.137607359" lastFinishedPulling="2026-02-23 00:10:42.619940203 +0000 UTC m=+240.553782049" observedRunningTime="2026-02-23 00:11:03.433805691 +0000 UTC m=+261.367647537" watchObservedRunningTime="2026-02-23 00:11:20.945812464 +0000 UTC m=+278.879654310" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.946264 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tccw9" podStartSLOduration=39.456633997 podStartE2EDuration="41.946261347s" podCreationTimestamp="2026-02-23 00:10:39 +0000 UTC" firstStartedPulling="2026-02-23 00:10:40.198341548 +0000 UTC m=+238.132183394" lastFinishedPulling="2026-02-23 00:10:42.687968898 +0000 UTC m=+240.621810744" observedRunningTime="2026-02-23 00:11:03.453042379 +0000 UTC m=+261.386884225" watchObservedRunningTime="2026-02-23 00:11:20.946261347 +0000 UTC m=+278.880103193" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.947919 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.947969 4953 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.947991 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.952220 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.968593 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.968568103 podStartE2EDuration="17.968568103s" podCreationTimestamp="2026-02-23 00:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:11:20.965787324 +0000 UTC m=+278.899629170" watchObservedRunningTime="2026-02-23 00:11:20.968568103 +0000 UTC m=+278.902409949" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.969473 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.983464 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 00:11:20 crc kubenswrapper[4953]: I0223 00:11:20.997007 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.110058 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.167617 4953 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.207414 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.226996 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.271020 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.318355 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.351833 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.536732 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.542010 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.599153 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.654747 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.689188 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.812729 4953 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.829174 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.839152 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.986725 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 23 00:11:21 crc kubenswrapper[4953]: I0223 00:11:21.997476 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.004805 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.128575 4953 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.135783 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.178025 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.228320 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.410825 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 
00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.419734 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.425835 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.436616 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.452333 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.459521 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.530279 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.607663 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.607893 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.607899 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.678780 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.696897 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.812191 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.827954 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.893486 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.925424 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.941790 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.954754 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.973798 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 00:11:22 crc kubenswrapper[4953]: I0223 00:11:22.988588 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.014248 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.069272 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.099829 
4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.104429 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.148686 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.178362 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.225255 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.354518 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.477748 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.477976 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.610721 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.635046 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.636727 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 00:11:23 crc 
kubenswrapper[4953]: I0223 00:11:23.713249 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.781267 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.804851 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.905764 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.930021 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 00:11:23 crc kubenswrapper[4953]: I0223 00:11:23.964134 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.010238 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.015851 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.046229 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.051442 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.054445 4953 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.068789 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.069317 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.161484 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.264866 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.375916 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.414788 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.437854 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.519899 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.529943 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.532529 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.822392 4953 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.877264 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.883459 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 00:11:24 crc kubenswrapper[4953]: I0223 00:11:24.951766 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.021718 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.076279 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.114382 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.117384 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.208807 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.208930 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.428903 4953 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca"
Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.564372 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.717817 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.740173 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.892462 4953 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.892805 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b" gracePeriod=5
Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.909525 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.962731 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 23 00:11:25 crc kubenswrapper[4953]: I0223 00:11:25.998538 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.099938 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.185302 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.460437 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.560919 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.612078 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.636349 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.648412 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.667585 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.723617 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.861776 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.865673 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 23 00:11:26 crc kubenswrapper[4953]: I0223 00:11:26.926544 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.051403 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.060500 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.094862 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.195860 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.399856 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.521019 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.545253 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.611791 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.629395 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.817429 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.819186 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.891725 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 00:11:27 crc kubenswrapper[4953]: I0223 00:11:27.900158 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.045326 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.049279 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.083592 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.161566 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.254738 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.364846 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.389528 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.556182 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.684250 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.689463 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.761623 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 23 00:11:28 crc kubenswrapper[4953]: I0223 00:11:28.957626 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 23 00:11:29 crc kubenswrapper[4953]: I0223 00:11:29.053541 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 23 00:11:29 crc kubenswrapper[4953]: I0223 00:11:29.086871 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 00:11:29 crc kubenswrapper[4953]: I0223 00:11:29.104256 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 23 00:11:29 crc kubenswrapper[4953]: I0223 00:11:29.630080 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 23 00:11:29 crc kubenswrapper[4953]: I0223 00:11:29.788417 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.521371 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.521471 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.658301 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.658343 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.658363 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.658380 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.658410 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.658852 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.658906 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.658932 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.659041 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.749771 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.749827 4953 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b" exitCode=137
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.749881 4953 scope.go:117] "RemoveContainer" containerID="fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b"
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.750009 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.760242 4953 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.760729 4953 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.760741 4953 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.760752 4953 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.768239 4953 scope.go:117] "RemoveContainer" containerID="fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b"
Feb 23 00:11:31 crc kubenswrapper[4953]: E0223 00:11:31.768721 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b\": container with ID starting with fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b not found: ID does not exist" containerID="fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b"
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.768762 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b"} err="failed to get container status \"fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b\": rpc error: code = NotFound desc = could not find container \"fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b\": container with ID starting with fa867ac8d73cdde257b8b877fd93a4e3762ef03128c3e4d1fd8d6f970fc12a3b not found: ID does not exist"
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.806207 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:11:31 crc kubenswrapper[4953]: I0223 00:11:31.862126 4953 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:33 crc kubenswrapper[4953]: I0223 00:11:33.333388 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 23 00:11:42 crc kubenswrapper[4953]: I0223 00:11:42.945729 4953 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 23 00:11:43 crc kubenswrapper[4953]: I0223 00:11:43.827475 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 00:11:50 crc kubenswrapper[4953]: I0223 00:11:50.952452 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 23 00:11:52 crc kubenswrapper[4953]: I0223 00:11:52.248604 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 23 00:11:52 crc kubenswrapper[4953]: I0223 00:11:52.598938 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 23 00:11:55 crc kubenswrapper[4953]: I0223 00:11:55.888180 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtp7m"]
Feb 23 00:11:55 crc kubenswrapper[4953]: I0223 00:11:55.889209 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" podUID="be696be4-2c84-4434-9d93-804c9bb6604b" containerName="controller-manager" containerID="cri-o://2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327" gracePeriod=30
Feb 23 00:11:55 crc kubenswrapper[4953]: I0223 00:11:55.975797 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"]
Feb 23 00:11:55 crc kubenswrapper[4953]: I0223 00:11:55.976113 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" podUID="ae22cab2-d791-4513-8794-e5d93b7447e5" containerName="route-controller-manager" containerID="cri-o://6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df" gracePeriod=30
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.256311 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.418073 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-config\") pod \"be696be4-2c84-4434-9d93-804c9bb6604b\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.418245 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-client-ca\") pod \"be696be4-2c84-4434-9d93-804c9bb6604b\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.418318 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be696be4-2c84-4434-9d93-804c9bb6604b-serving-cert\") pod \"be696be4-2c84-4434-9d93-804c9bb6604b\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.418358 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf77c\" (UniqueName: \"kubernetes.io/projected/be696be4-2c84-4434-9d93-804c9bb6604b-kube-api-access-qf77c\") pod \"be696be4-2c84-4434-9d93-804c9bb6604b\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.418384 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-proxy-ca-bundles\") pod \"be696be4-2c84-4434-9d93-804c9bb6604b\" (UID: \"be696be4-2c84-4434-9d93-804c9bb6604b\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.419309 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-config" (OuterVolumeSpecName: "config") pod "be696be4-2c84-4434-9d93-804c9bb6604b" (UID: "be696be4-2c84-4434-9d93-804c9bb6604b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.419589 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.419890 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "be696be4-2c84-4434-9d93-804c9bb6604b" (UID: "be696be4-2c84-4434-9d93-804c9bb6604b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.420348 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-client-ca" (OuterVolumeSpecName: "client-ca") pod "be696be4-2c84-4434-9d93-804c9bb6604b" (UID: "be696be4-2c84-4434-9d93-804c9bb6604b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.426058 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be696be4-2c84-4434-9d93-804c9bb6604b-kube-api-access-qf77c" (OuterVolumeSpecName: "kube-api-access-qf77c") pod "be696be4-2c84-4434-9d93-804c9bb6604b" (UID: "be696be4-2c84-4434-9d93-804c9bb6604b"). InnerVolumeSpecName "kube-api-access-qf77c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.426271 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be696be4-2c84-4434-9d93-804c9bb6604b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be696be4-2c84-4434-9d93-804c9bb6604b" (UID: "be696be4-2c84-4434-9d93-804c9bb6604b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.519763 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gmlm\" (UniqueName: \"kubernetes.io/projected/ae22cab2-d791-4513-8794-e5d93b7447e5-kube-api-access-4gmlm\") pod \"ae22cab2-d791-4513-8794-e5d93b7447e5\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.519932 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-config\") pod \"ae22cab2-d791-4513-8794-e5d93b7447e5\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.519960 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-client-ca\") pod \"ae22cab2-d791-4513-8794-e5d93b7447e5\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.520010 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae22cab2-d791-4513-8794-e5d93b7447e5-serving-cert\") pod \"ae22cab2-d791-4513-8794-e5d93b7447e5\" (UID: \"ae22cab2-d791-4513-8794-e5d93b7447e5\") "
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.520379 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.520424 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be696be4-2c84-4434-9d93-804c9bb6604b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.520441 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf77c\" (UniqueName: \"kubernetes.io/projected/be696be4-2c84-4434-9d93-804c9bb6604b-kube-api-access-qf77c\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.520466 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.520481 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be696be4-2c84-4434-9d93-804c9bb6604b-config\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.521238 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "ae22cab2-d791-4513-8794-e5d93b7447e5" (UID: "ae22cab2-d791-4513-8794-e5d93b7447e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.521463 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-config" (OuterVolumeSpecName: "config") pod "ae22cab2-d791-4513-8794-e5d93b7447e5" (UID: "ae22cab2-d791-4513-8794-e5d93b7447e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.524157 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae22cab2-d791-4513-8794-e5d93b7447e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ae22cab2-d791-4513-8794-e5d93b7447e5" (UID: "ae22cab2-d791-4513-8794-e5d93b7447e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.524223 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae22cab2-d791-4513-8794-e5d93b7447e5-kube-api-access-4gmlm" (OuterVolumeSpecName: "kube-api-access-4gmlm") pod "ae22cab2-d791-4513-8794-e5d93b7447e5" (UID: "ae22cab2-d791-4513-8794-e5d93b7447e5"). InnerVolumeSpecName "kube-api-access-4gmlm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.638402 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-config\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.638461 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae22cab2-d791-4513-8794-e5d93b7447e5-client-ca\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.638481 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae22cab2-d791-4513-8794-e5d93b7447e5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.638496 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gmlm\" (UniqueName: \"kubernetes.io/projected/ae22cab2-d791-4513-8794-e5d93b7447e5-kube-api-access-4gmlm\") on node \"crc\" DevicePath \"\""
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.920952 4953 generic.go:334] "Generic (PLEG): container finished" podID="ae22cab2-d791-4513-8794-e5d93b7447e5" containerID="6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df" exitCode=0
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.921005 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" event={"ID":"ae22cab2-d791-4513-8794-e5d93b7447e5","Type":"ContainerDied","Data":"6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df"}
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.921048 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc" event={"ID":"ae22cab2-d791-4513-8794-e5d93b7447e5","Type":"ContainerDied","Data":"7327d209d6df8a649fbf876cb3105de94243316184ad78871ce94c188315f975"}
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.921069 4953 scope.go:117] "RemoveContainer" containerID="6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.921108 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.922923 4953 generic.go:334] "Generic (PLEG): container finished" podID="be696be4-2c84-4434-9d93-804c9bb6604b" containerID="2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327" exitCode=0
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.922951 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" event={"ID":"be696be4-2c84-4434-9d93-804c9bb6604b","Type":"ContainerDied","Data":"2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327"}
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.922968 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m" event={"ID":"be696be4-2c84-4434-9d93-804c9bb6604b","Type":"ContainerDied","Data":"093f25019f69e073392e5bc93fca27fab07277a38c89d7577aeff999706e816f"}
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.923024 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vtp7m"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.943332 4953 scope.go:117] "RemoveContainer" containerID="6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df"
Feb 23 00:11:56 crc kubenswrapper[4953]: E0223 00:11:56.944610 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df\": container with ID starting with 6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df not found: ID does not exist" containerID="6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.944647 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df"} err="failed to get container status \"6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df\": rpc error: code = NotFound desc = could not find container \"6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df\": container with ID starting with 6dce8eb176193ccecee5255cafd0e1eb217f90dfb1589a0ed950fa4805d0d7df not found: ID does not exist"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.944671 4953 scope.go:117] "RemoveContainer" containerID="2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.951972 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtp7m"]
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.956537 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vtp7m"]
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.965953 4953 scope.go:117] "RemoveContainer" containerID="2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.966121 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"]
Feb 23 00:11:56 crc kubenswrapper[4953]: E0223 00:11:56.966691 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327\": container with ID starting with 2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327 not found: ID does not exist" containerID="2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.966745 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327"} err="failed to get container status \"2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327\": rpc error: code = NotFound desc = could not find container \"2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327\": container with ID starting with 2ebef613bdd3641e48e660106ae77eea9c77a66749206f75b6dca89698ab9327 not found: ID does not exist"
Feb 23 00:11:56 crc kubenswrapper[4953]: I0223 00:11:56.969538 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zblkc"]
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.332642 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae22cab2-d791-4513-8794-e5d93b7447e5" path="/var/lib/kubelet/pods/ae22cab2-d791-4513-8794-e5d93b7447e5/volumes"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.333412 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be696be4-2c84-4434-9d93-804c9bb6604b" path="/var/lib/kubelet/pods/be696be4-2c84-4434-9d93-804c9bb6604b/volumes"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.895900 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k"]
Feb 23 00:11:57 crc kubenswrapper[4953]: E0223 00:11:57.896243 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be696be4-2c84-4434-9d93-804c9bb6604b" containerName="controller-manager"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.896269 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="be696be4-2c84-4434-9d93-804c9bb6604b" containerName="controller-manager"
Feb 23 00:11:57 crc kubenswrapper[4953]: E0223 00:11:57.896296 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.896304 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 23 00:11:57 crc kubenswrapper[4953]: E0223 00:11:57.896313 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae22cab2-d791-4513-8794-e5d93b7447e5" containerName="route-controller-manager"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.896319 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae22cab2-d791-4513-8794-e5d93b7447e5" containerName="route-controller-manager"
Feb 23 00:11:57 crc kubenswrapper[4953]: E0223 00:11:57.896329 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" containerName="installer"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.896336 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" containerName="installer"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.896457 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e4ec4c-763f-42cd-9e76-7f4c3eb9e263" containerName="installer"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.896639 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae22cab2-d791-4513-8794-e5d93b7447e5" containerName="route-controller-manager"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.896676 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="be696be4-2c84-4434-9d93-804c9bb6604b" containerName="controller-manager"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.896685 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.897792 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.900901 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.901464 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.901681 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.902965 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.903264 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.903530 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.910478 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc"]
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.911492 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.916106 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.916263 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.916398 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.916471 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.916879 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.918319 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.920130 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc"]
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.921178 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 00:11:57 crc kubenswrapper[4953]: I0223
00:11:57.922959 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k"] Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955351 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2jm\" (UniqueName: \"kubernetes.io/projected/a3faae14-fa99-410b-abe4-db6bf9708ef9-kube-api-access-qw2jm\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955394 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-proxy-ca-bundles\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955418 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-client-ca\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955433 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-config\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955450 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55cr4\" (UniqueName: \"kubernetes.io/projected/143a0c37-56e9-437f-9b22-6892bc142c4b-kube-api-access-55cr4\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955487 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-client-ca\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955508 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3faae14-fa99-410b-abe4-db6bf9708ef9-serving-cert\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955531 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-config\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:57 crc kubenswrapper[4953]: I0223 00:11:57.955547 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143a0c37-56e9-437f-9b22-6892bc142c4b-serving-cert\") 
pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.043540 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc"] Feb 23 00:11:58 crc kubenswrapper[4953]: E0223 00:11:58.044325 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-qw2jm proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" podUID="a3faae14-fa99-410b-abe4-db6bf9708ef9" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056779 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-client-ca\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056837 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3faae14-fa99-410b-abe4-db6bf9708ef9-serving-cert\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056868 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-config\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " 
pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056888 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143a0c37-56e9-437f-9b22-6892bc142c4b-serving-cert\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056918 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2jm\" (UniqueName: \"kubernetes.io/projected/a3faae14-fa99-410b-abe4-db6bf9708ef9-kube-api-access-qw2jm\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056942 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-proxy-ca-bundles\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056960 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-client-ca\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056975 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-config\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.056990 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55cr4\" (UniqueName: \"kubernetes.io/projected/143a0c37-56e9-437f-9b22-6892bc142c4b-kube-api-access-55cr4\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.058533 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-client-ca\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.058654 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-config\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.058899 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-proxy-ca-bundles\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.059142 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-config\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.059436 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-client-ca\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.059449 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k"] Feb 23 00:11:58 crc kubenswrapper[4953]: E0223 00:11:58.059925 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-55cr4 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" podUID="143a0c37-56e9-437f-9b22-6892bc142c4b" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.061943 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3faae14-fa99-410b-abe4-db6bf9708ef9-serving-cert\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.066834 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143a0c37-56e9-437f-9b22-6892bc142c4b-serving-cert\") pod 
\"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.076951 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55cr4\" (UniqueName: \"kubernetes.io/projected/143a0c37-56e9-437f-9b22-6892bc142c4b-kube-api-access-55cr4\") pod \"route-controller-manager-566f5c6c9d-7rj5k\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.079404 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2jm\" (UniqueName: \"kubernetes.io/projected/a3faae14-fa99-410b-abe4-db6bf9708ef9-kube-api-access-qw2jm\") pod \"controller-manager-7bc58b8bd6-hgncc\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.943198 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.943215 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.954186 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:11:58 crc kubenswrapper[4953]: I0223 00:11:58.960954 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.069064 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2jm\" (UniqueName: \"kubernetes.io/projected/a3faae14-fa99-410b-abe4-db6bf9708ef9-kube-api-access-qw2jm\") pod \"a3faae14-fa99-410b-abe4-db6bf9708ef9\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.069972 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-client-ca\") pod \"a3faae14-fa99-410b-abe4-db6bf9708ef9\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070028 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143a0c37-56e9-437f-9b22-6892bc142c4b-serving-cert\") pod \"143a0c37-56e9-437f-9b22-6892bc142c4b\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070054 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-config\") pod \"143a0c37-56e9-437f-9b22-6892bc142c4b\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070127 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55cr4\" (UniqueName: \"kubernetes.io/projected/143a0c37-56e9-437f-9b22-6892bc142c4b-kube-api-access-55cr4\") pod \"143a0c37-56e9-437f-9b22-6892bc142c4b\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070172 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-config\") pod \"a3faae14-fa99-410b-abe4-db6bf9708ef9\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070246 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-proxy-ca-bundles\") pod \"a3faae14-fa99-410b-abe4-db6bf9708ef9\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070429 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3faae14-fa99-410b-abe4-db6bf9708ef9-serving-cert\") pod \"a3faae14-fa99-410b-abe4-db6bf9708ef9\" (UID: \"a3faae14-fa99-410b-abe4-db6bf9708ef9\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070456 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-client-ca\") pod \"143a0c37-56e9-437f-9b22-6892bc142c4b\" (UID: \"143a0c37-56e9-437f-9b22-6892bc142c4b\") " Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070483 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-client-ca" (OuterVolumeSpecName: "client-ca") pod "a3faae14-fa99-410b-abe4-db6bf9708ef9" (UID: "a3faae14-fa99-410b-abe4-db6bf9708ef9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070985 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.071006 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-client-ca" (OuterVolumeSpecName: "client-ca") pod "143a0c37-56e9-437f-9b22-6892bc142c4b" (UID: "143a0c37-56e9-437f-9b22-6892bc142c4b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.070999 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a3faae14-fa99-410b-abe4-db6bf9708ef9" (UID: "a3faae14-fa99-410b-abe4-db6bf9708ef9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.071228 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-config" (OuterVolumeSpecName: "config") pod "143a0c37-56e9-437f-9b22-6892bc142c4b" (UID: "143a0c37-56e9-437f-9b22-6892bc142c4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.071264 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-config" (OuterVolumeSpecName: "config") pod "a3faae14-fa99-410b-abe4-db6bf9708ef9" (UID: "a3faae14-fa99-410b-abe4-db6bf9708ef9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.074958 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143a0c37-56e9-437f-9b22-6892bc142c4b-kube-api-access-55cr4" (OuterVolumeSpecName: "kube-api-access-55cr4") pod "143a0c37-56e9-437f-9b22-6892bc142c4b" (UID: "143a0c37-56e9-437f-9b22-6892bc142c4b"). InnerVolumeSpecName "kube-api-access-55cr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.075015 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3faae14-fa99-410b-abe4-db6bf9708ef9-kube-api-access-qw2jm" (OuterVolumeSpecName: "kube-api-access-qw2jm") pod "a3faae14-fa99-410b-abe4-db6bf9708ef9" (UID: "a3faae14-fa99-410b-abe4-db6bf9708ef9"). InnerVolumeSpecName "kube-api-access-qw2jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.075031 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143a0c37-56e9-437f-9b22-6892bc142c4b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "143a0c37-56e9-437f-9b22-6892bc142c4b" (UID: "143a0c37-56e9-437f-9b22-6892bc142c4b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.084059 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3faae14-fa99-410b-abe4-db6bf9708ef9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3faae14-fa99-410b-abe4-db6bf9708ef9" (UID: "a3faae14-fa99-410b-abe4-db6bf9708ef9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.173011 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw2jm\" (UniqueName: \"kubernetes.io/projected/a3faae14-fa99-410b-abe4-db6bf9708ef9-kube-api-access-qw2jm\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.173052 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/143a0c37-56e9-437f-9b22-6892bc142c4b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.173070 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.173080 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55cr4\" (UniqueName: \"kubernetes.io/projected/143a0c37-56e9-437f-9b22-6892bc142c4b-kube-api-access-55cr4\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.173089 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.173098 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3faae14-fa99-410b-abe4-db6bf9708ef9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.173109 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3faae14-fa99-410b-abe4-db6bf9708ef9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.173117 4953 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/143a0c37-56e9-437f-9b22-6892bc142c4b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.954713 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k" Feb 23 00:11:59 crc kubenswrapper[4953]: I0223 00:11:59.954750 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:11:59.992823 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2"] Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:11:59.993746 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:11:59.996017 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:11:59.997655 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:11:59.997969 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:11:59.998136 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:11:59.998324 4953 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:11:59.998479 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.003146 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k"] Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.014497 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566f5c6c9d-7rj5k"] Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.014542 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2"] Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.043645 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc"] Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.073392 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bc58b8bd6-hgncc"] Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.183021 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-serving-cert\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.183088 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-client-ca\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: 
\"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.183136 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-config\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.183185 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99tpp\" (UniqueName: \"kubernetes.io/projected/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-kube-api-access-99tpp\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.284250 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-serving-cert\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.284326 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-client-ca\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.284377 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-config\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.284402 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99tpp\" (UniqueName: \"kubernetes.io/projected/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-kube-api-access-99tpp\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.285444 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-client-ca\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.285814 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-config\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.288245 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-serving-cert\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " 
pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.299571 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99tpp\" (UniqueName: \"kubernetes.io/projected/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-kube-api-access-99tpp\") pod \"route-controller-manager-bbdffdc5f-pdbf2\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.349414 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.565803 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2"] Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.962398 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" event={"ID":"6a6a78f9-e454-4f1a-94a3-0d2b895970fe","Type":"ContainerStarted","Data":"436b9b6a4f38e88861d87df80a7f0fdac04c6ac6d3ef6d70e6b9cd8afb5f9d86"} Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.962862 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" event={"ID":"6a6a78f9-e454-4f1a-94a3-0d2b895970fe","Type":"ContainerStarted","Data":"0cdf506a41616eb81f8c59bbf731d43975adbed85e132ac9235484d3d085a626"} Feb 23 00:12:00 crc kubenswrapper[4953]: I0223 00:12:00.962878 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:01 crc kubenswrapper[4953]: I0223 00:12:01.261116 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:01 crc kubenswrapper[4953]: I0223 00:12:01.283267 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" podStartSLOduration=3.283242127 podStartE2EDuration="3.283242127s" podCreationTimestamp="2026-02-23 00:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:12:00.985648918 +0000 UTC m=+318.919490764" watchObservedRunningTime="2026-02-23 00:12:01.283242127 +0000 UTC m=+319.217083963" Feb 23 00:12:01 crc kubenswrapper[4953]: I0223 00:12:01.333263 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143a0c37-56e9-437f-9b22-6892bc142c4b" path="/var/lib/kubelet/pods/143a0c37-56e9-437f-9b22-6892bc142c4b/volumes" Feb 23 00:12:01 crc kubenswrapper[4953]: I0223 00:12:01.333669 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3faae14-fa99-410b-abe4-db6bf9708ef9" path="/var/lib/kubelet/pods/a3faae14-fa99-410b-abe4-db6bf9708ef9/volumes" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.452503 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.899376 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c55fb6d95-6d427"] Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.900439 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.902582 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.903434 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.903768 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.904828 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.904987 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.904986 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.908169 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c55fb6d95-6d427"] Feb 23 00:12:02 crc kubenswrapper[4953]: I0223 00:12:02.921249 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.025933 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-proxy-ca-bundles\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " 
pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.025984 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-client-ca\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.026031 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-config\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.026060 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6nf\" (UniqueName: \"kubernetes.io/projected/4705c81f-7619-4f4d-a045-3478d3d88860-kube-api-access-gc6nf\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.026090 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4705c81f-7619-4f4d-a045-3478d3d88860-serving-cert\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.127074 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4705c81f-7619-4f4d-a045-3478d3d88860-serving-cert\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.127182 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-proxy-ca-bundles\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.127216 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-client-ca\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.127249 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-config\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.127306 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc6nf\" (UniqueName: \"kubernetes.io/projected/4705c81f-7619-4f4d-a045-3478d3d88860-kube-api-access-gc6nf\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.128188 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-client-ca\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.128789 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-config\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.129069 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-proxy-ca-bundles\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.139869 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4705c81f-7619-4f4d-a045-3478d3d88860-serving-cert\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.146085 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc6nf\" (UniqueName: \"kubernetes.io/projected/4705c81f-7619-4f4d-a045-3478d3d88860-kube-api-access-gc6nf\") pod \"controller-manager-7c55fb6d95-6d427\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 
00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.224652 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.408877 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c55fb6d95-6d427"] Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.989908 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" event={"ID":"4705c81f-7619-4f4d-a045-3478d3d88860","Type":"ContainerStarted","Data":"07eba78f6f7018e435e9b95b2072a50f2e5799440bf8dfeda88f8fce4590831c"} Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.989975 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" event={"ID":"4705c81f-7619-4f4d-a045-3478d3d88860","Type":"ContainerStarted","Data":"36391fe6b595405aea15f853536b2191aed2b58b33b838abf30b0b0b0841eb1d"} Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.991112 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:03 crc kubenswrapper[4953]: I0223 00:12:03.995977 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:04 crc kubenswrapper[4953]: I0223 00:12:04.012301 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" podStartSLOduration=6.012258798 podStartE2EDuration="6.012258798s" podCreationTimestamp="2026-02-23 00:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:12:04.005144725 +0000 UTC m=+321.938986571" 
watchObservedRunningTime="2026-02-23 00:12:04.012258798 +0000 UTC m=+321.946100654" Feb 23 00:12:15 crc kubenswrapper[4953]: I0223 00:12:15.886274 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2"] Feb 23 00:12:15 crc kubenswrapper[4953]: I0223 00:12:15.887487 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" podUID="6a6a78f9-e454-4f1a-94a3-0d2b895970fe" containerName="route-controller-manager" containerID="cri-o://436b9b6a4f38e88861d87df80a7f0fdac04c6ac6d3ef6d70e6b9cd8afb5f9d86" gracePeriod=30 Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.059433 4953 generic.go:334] "Generic (PLEG): container finished" podID="6a6a78f9-e454-4f1a-94a3-0d2b895970fe" containerID="436b9b6a4f38e88861d87df80a7f0fdac04c6ac6d3ef6d70e6b9cd8afb5f9d86" exitCode=0 Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.059495 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" event={"ID":"6a6a78f9-e454-4f1a-94a3-0d2b895970fe","Type":"ContainerDied","Data":"436b9b6a4f38e88861d87df80a7f0fdac04c6ac6d3ef6d70e6b9cd8afb5f9d86"} Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.361770 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.444974 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-config\") pod \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.445059 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-client-ca\") pod \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.445125 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99tpp\" (UniqueName: \"kubernetes.io/projected/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-kube-api-access-99tpp\") pod \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.445177 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-serving-cert\") pod \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\" (UID: \"6a6a78f9-e454-4f1a-94a3-0d2b895970fe\") " Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.446033 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a6a78f9-e454-4f1a-94a3-0d2b895970fe" (UID: "6a6a78f9-e454-4f1a-94a3-0d2b895970fe"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.446038 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-config" (OuterVolumeSpecName: "config") pod "6a6a78f9-e454-4f1a-94a3-0d2b895970fe" (UID: "6a6a78f9-e454-4f1a-94a3-0d2b895970fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.454935 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-kube-api-access-99tpp" (OuterVolumeSpecName: "kube-api-access-99tpp") pod "6a6a78f9-e454-4f1a-94a3-0d2b895970fe" (UID: "6a6a78f9-e454-4f1a-94a3-0d2b895970fe"). InnerVolumeSpecName "kube-api-access-99tpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.456513 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a6a78f9-e454-4f1a-94a3-0d2b895970fe" (UID: "6a6a78f9-e454-4f1a-94a3-0d2b895970fe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.546380 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.546445 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99tpp\" (UniqueName: \"kubernetes.io/projected/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-kube-api-access-99tpp\") on node \"crc\" DevicePath \"\"" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.546461 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.546522 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6a78f9-e454-4f1a-94a3-0d2b895970fe-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.917325 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79"] Feb 23 00:12:16 crc kubenswrapper[4953]: E0223 00:12:16.918025 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6a78f9-e454-4f1a-94a3-0d2b895970fe" containerName="route-controller-manager" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.918046 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6a78f9-e454-4f1a-94a3-0d2b895970fe" containerName="route-controller-manager" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.918172 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6a78f9-e454-4f1a-94a3-0d2b895970fe" containerName="route-controller-manager" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.918898 4953 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:16 crc kubenswrapper[4953]: I0223 00:12:16.930008 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79"] Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.055260 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbskv\" (UniqueName: \"kubernetes.io/projected/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-kube-api-access-zbskv\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.055479 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-client-ca\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.055782 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-serving-cert\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.056102 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-config\") pod 
\"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.067712 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" event={"ID":"6a6a78f9-e454-4f1a-94a3-0d2b895970fe","Type":"ContainerDied","Data":"0cdf506a41616eb81f8c59bbf731d43975adbed85e132ac9235484d3d085a626"} Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.067801 4953 scope.go:117] "RemoveContainer" containerID="436b9b6a4f38e88861d87df80a7f0fdac04c6ac6d3ef6d70e6b9cd8afb5f9d86" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.067830 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.104798 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2"] Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.111776 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bbdffdc5f-pdbf2"] Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.157331 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-serving-cert\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.157401 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-config\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.157469 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbskv\" (UniqueName: \"kubernetes.io/projected/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-kube-api-access-zbskv\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.157553 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-client-ca\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.158810 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-config\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.162373 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-client-ca\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc 
kubenswrapper[4953]: I0223 00:12:17.171489 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-serving-cert\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.178364 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbskv\" (UniqueName: \"kubernetes.io/projected/86cc8dd2-46f8-49f8-b578-1e9ce41fc774-kube-api-access-zbskv\") pod \"route-controller-manager-7fc4b9cdc6-5zm79\" (UID: \"86cc8dd2-46f8-49f8-b578-1e9ce41fc774\") " pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.242070 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.339855 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6a78f9-e454-4f1a-94a3-0d2b895970fe" path="/var/lib/kubelet/pods/6a6a78f9-e454-4f1a-94a3-0d2b895970fe/volumes" Feb 23 00:12:17 crc kubenswrapper[4953]: I0223 00:12:17.647159 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79"] Feb 23 00:12:18 crc kubenswrapper[4953]: I0223 00:12:18.089389 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" event={"ID":"86cc8dd2-46f8-49f8-b578-1e9ce41fc774","Type":"ContainerStarted","Data":"78b69d47ac404d4f505e1be7da3d3869238fdca692ca616c2cad34b528ab654f"} Feb 23 00:12:18 crc kubenswrapper[4953]: I0223 00:12:18.089440 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" event={"ID":"86cc8dd2-46f8-49f8-b578-1e9ce41fc774","Type":"ContainerStarted","Data":"e83eb953eb88c55b5b4227e69e861f2c07ee221bc18869bab56bd02d2597807d"} Feb 23 00:12:18 crc kubenswrapper[4953]: I0223 00:12:18.089865 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:18 crc kubenswrapper[4953]: I0223 00:12:18.095323 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" Feb 23 00:12:18 crc kubenswrapper[4953]: I0223 00:12:18.111988 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fc4b9cdc6-5zm79" podStartSLOduration=3.111961259 podStartE2EDuration="3.111961259s" podCreationTimestamp="2026-02-23 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:12:18.110239938 +0000 UTC m=+336.044081794" watchObservedRunningTime="2026-02-23 00:12:18.111961259 +0000 UTC m=+336.045803125" Feb 23 00:12:35 crc kubenswrapper[4953]: I0223 00:12:35.860986 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c55fb6d95-6d427"] Feb 23 00:12:35 crc kubenswrapper[4953]: I0223 00:12:35.862142 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" podUID="4705c81f-7619-4f4d-a045-3478d3d88860" containerName="controller-manager" containerID="cri-o://07eba78f6f7018e435e9b95b2072a50f2e5799440bf8dfeda88f8fce4590831c" gracePeriod=30 Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.209067 4953 generic.go:334] "Generic (PLEG): container finished" 
podID="4705c81f-7619-4f4d-a045-3478d3d88860" containerID="07eba78f6f7018e435e9b95b2072a50f2e5799440bf8dfeda88f8fce4590831c" exitCode=0 Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.209119 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" event={"ID":"4705c81f-7619-4f4d-a045-3478d3d88860","Type":"ContainerDied","Data":"07eba78f6f7018e435e9b95b2072a50f2e5799440bf8dfeda88f8fce4590831c"} Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.311475 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.439747 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc6nf\" (UniqueName: \"kubernetes.io/projected/4705c81f-7619-4f4d-a045-3478d3d88860-kube-api-access-gc6nf\") pod \"4705c81f-7619-4f4d-a045-3478d3d88860\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.439848 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-proxy-ca-bundles\") pod \"4705c81f-7619-4f4d-a045-3478d3d88860\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.439896 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-config\") pod \"4705c81f-7619-4f4d-a045-3478d3d88860\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.439973 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-client-ca\") pod \"4705c81f-7619-4f4d-a045-3478d3d88860\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.440000 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4705c81f-7619-4f4d-a045-3478d3d88860-serving-cert\") pod \"4705c81f-7619-4f4d-a045-3478d3d88860\" (UID: \"4705c81f-7619-4f4d-a045-3478d3d88860\") " Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.441043 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4705c81f-7619-4f4d-a045-3478d3d88860" (UID: "4705c81f-7619-4f4d-a045-3478d3d88860"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.441138 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-config" (OuterVolumeSpecName: "config") pod "4705c81f-7619-4f4d-a045-3478d3d88860" (UID: "4705c81f-7619-4f4d-a045-3478d3d88860"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.441208 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-client-ca" (OuterVolumeSpecName: "client-ca") pod "4705c81f-7619-4f4d-a045-3478d3d88860" (UID: "4705c81f-7619-4f4d-a045-3478d3d88860"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.445152 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4705c81f-7619-4f4d-a045-3478d3d88860-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4705c81f-7619-4f4d-a045-3478d3d88860" (UID: "4705c81f-7619-4f4d-a045-3478d3d88860"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.445361 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4705c81f-7619-4f4d-a045-3478d3d88860-kube-api-access-gc6nf" (OuterVolumeSpecName: "kube-api-access-gc6nf") pod "4705c81f-7619-4f4d-a045-3478d3d88860" (UID: "4705c81f-7619-4f4d-a045-3478d3d88860"). InnerVolumeSpecName "kube-api-access-gc6nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.541237 4953 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.541276 4953 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4705c81f-7619-4f4d-a045-3478d3d88860-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.541333 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc6nf\" (UniqueName: \"kubernetes.io/projected/4705c81f-7619-4f4d-a045-3478d3d88860-kube-api-access-gc6nf\") on node \"crc\" DevicePath \"\"" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.541349 4953 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.541362 4953 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4705c81f-7619-4f4d-a045-3478d3d88860-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.930254 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78d5f568f5-krnq7"] Feb 23 00:12:36 crc kubenswrapper[4953]: E0223 00:12:36.930525 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4705c81f-7619-4f4d-a045-3478d3d88860" containerName="controller-manager" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.930541 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="4705c81f-7619-4f4d-a045-3478d3d88860" containerName="controller-manager" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.930635 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="4705c81f-7619-4f4d-a045-3478d3d88860" containerName="controller-manager" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.931087 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.940118 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78d5f568f5-krnq7"] Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.956746 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvf6\" (UniqueName: \"kubernetes.io/projected/26ff5d2d-a303-498d-b77f-92cdbbf37da1-kube-api-access-tmvf6\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.956801 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-client-ca\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.956827 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26ff5d2d-a303-498d-b77f-92cdbbf37da1-serving-cert\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.956905 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-proxy-ca-bundles\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " 
pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:36 crc kubenswrapper[4953]: I0223 00:12:36.956966 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-config\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.058205 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-config\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.058316 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvf6\" (UniqueName: \"kubernetes.io/projected/26ff5d2d-a303-498d-b77f-92cdbbf37da1-kube-api-access-tmvf6\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.058365 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-client-ca\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.058403 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26ff5d2d-a303-498d-b77f-92cdbbf37da1-serving-cert\") 
pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.058451 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-proxy-ca-bundles\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.059770 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-client-ca\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.059806 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-config\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.060158 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ff5d2d-a303-498d-b77f-92cdbbf37da1-proxy-ca-bundles\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.063533 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/26ff5d2d-a303-498d-b77f-92cdbbf37da1-serving-cert\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.074464 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvf6\" (UniqueName: \"kubernetes.io/projected/26ff5d2d-a303-498d-b77f-92cdbbf37da1-kube-api-access-tmvf6\") pod \"controller-manager-78d5f568f5-krnq7\" (UID: \"26ff5d2d-a303-498d-b77f-92cdbbf37da1\") " pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.215836 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" event={"ID":"4705c81f-7619-4f4d-a045-3478d3d88860","Type":"ContainerDied","Data":"36391fe6b595405aea15f853536b2191aed2b58b33b838abf30b0b0b0841eb1d"} Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.215901 4953 scope.go:117] "RemoveContainer" containerID="07eba78f6f7018e435e9b95b2072a50f2e5799440bf8dfeda88f8fce4590831c" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.215937 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c55fb6d95-6d427" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.241947 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c55fb6d95-6d427"] Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.246064 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c55fb6d95-6d427"] Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.250009 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.345466 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4705c81f-7619-4f4d-a045-3478d3d88860" path="/var/lib/kubelet/pods/4705c81f-7619-4f4d-a045-3478d3d88860/volumes" Feb 23 00:12:37 crc kubenswrapper[4953]: I0223 00:12:37.631236 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78d5f568f5-krnq7"] Feb 23 00:12:37 crc kubenswrapper[4953]: W0223 00:12:37.640275 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26ff5d2d_a303_498d_b77f_92cdbbf37da1.slice/crio-ca830465eeb2f4a3533a0ec1f905850965251d5e5bd86efd789508ada2554ead WatchSource:0}: Error finding container ca830465eeb2f4a3533a0ec1f905850965251d5e5bd86efd789508ada2554ead: Status 404 returned error can't find the container with id ca830465eeb2f4a3533a0ec1f905850965251d5e5bd86efd789508ada2554ead Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.224269 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" event={"ID":"26ff5d2d-a303-498d-b77f-92cdbbf37da1","Type":"ContainerStarted","Data":"0ef8f74a7b9a14af1a765956ee5ff1f612ddc90152a3fd60c9bdac13406f1fff"} Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.224738 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" event={"ID":"26ff5d2d-a303-498d-b77f-92cdbbf37da1","Type":"ContainerStarted","Data":"ca830465eeb2f4a3533a0ec1f905850965251d5e5bd86efd789508ada2554ead"} Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.224756 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:38 crc 
kubenswrapper[4953]: I0223 00:12:38.230658 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.240337 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78d5f568f5-krnq7" podStartSLOduration=3.240316556 podStartE2EDuration="3.240316556s" podCreationTimestamp="2026-02-23 00:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:12:38.237800453 +0000 UTC m=+356.171642309" watchObservedRunningTime="2026-02-23 00:12:38.240316556 +0000 UTC m=+356.174158402" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.381568 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hsj6x"] Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.382274 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.398497 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hsj6x"] Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.479315 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35387d97-e47e-4eb3-8009-a3cba5d3325b-registry-certificates\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.479366 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xl46\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-kube-api-access-7xl46\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.479651 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35387d97-e47e-4eb3-8009-a3cba5d3325b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.479709 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-registry-tls\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.479737 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35387d97-e47e-4eb3-8009-a3cba5d3325b-trusted-ca\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.479797 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35387d97-e47e-4eb3-8009-a3cba5d3325b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.479839 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-bound-sa-token\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.479923 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.500894 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.580681 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35387d97-e47e-4eb3-8009-a3cba5d3325b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.580736 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-registry-tls\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.580755 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35387d97-e47e-4eb3-8009-a3cba5d3325b-trusted-ca\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.580778 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35387d97-e47e-4eb3-8009-a3cba5d3325b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.580799 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-bound-sa-token\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.580830 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35387d97-e47e-4eb3-8009-a3cba5d3325b-registry-certificates\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.580849 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xl46\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-kube-api-access-7xl46\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.581774 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35387d97-e47e-4eb3-8009-a3cba5d3325b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.582567 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35387d97-e47e-4eb3-8009-a3cba5d3325b-trusted-ca\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc 
kubenswrapper[4953]: I0223 00:12:38.582839 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35387d97-e47e-4eb3-8009-a3cba5d3325b-registry-certificates\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.588482 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-registry-tls\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.588586 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35387d97-e47e-4eb3-8009-a3cba5d3325b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.597524 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-bound-sa-token\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.598481 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xl46\" (UniqueName: \"kubernetes.io/projected/35387d97-e47e-4eb3-8009-a3cba5d3325b-kube-api-access-7xl46\") pod \"image-registry-66df7c8f76-hsj6x\" (UID: \"35387d97-e47e-4eb3-8009-a3cba5d3325b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:38 crc kubenswrapper[4953]: I0223 00:12:38.698494 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:39 crc kubenswrapper[4953]: I0223 00:12:39.022445 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hsj6x"] Feb 23 00:12:39 crc kubenswrapper[4953]: I0223 00:12:39.233812 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" event={"ID":"35387d97-e47e-4eb3-8009-a3cba5d3325b","Type":"ContainerStarted","Data":"cf9a632bc2da85a7b7dbb38d70fd3303d9bce1ea17c201946523ba91a7b8359f"} Feb 23 00:12:39 crc kubenswrapper[4953]: I0223 00:12:39.235670 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" event={"ID":"35387d97-e47e-4eb3-8009-a3cba5d3325b","Type":"ContainerStarted","Data":"9d16aedaa341a8a800abe9a1d623250d398aee88b599f01ae69027f11a125b1d"} Feb 23 00:12:39 crc kubenswrapper[4953]: I0223 00:12:39.235855 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:39 crc kubenswrapper[4953]: I0223 00:12:39.255359 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" podStartSLOduration=1.255280204 podStartE2EDuration="1.255280204s" podCreationTimestamp="2026-02-23 00:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:12:39.253470131 +0000 UTC m=+357.187311977" watchObservedRunningTime="2026-02-23 00:12:39.255280204 +0000 UTC m=+357.189122060" Feb 23 00:12:44 crc kubenswrapper[4953]: I0223 00:12:44.700631 4953 patch_prober.go:28] interesting 
pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:12:44 crc kubenswrapper[4953]: I0223 00:12:44.701096 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:12:58 crc kubenswrapper[4953]: I0223 00:12:58.705998 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hsj6x" Feb 23 00:12:58 crc kubenswrapper[4953]: I0223 00:12:58.804729 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwcpz"] Feb 23 00:13:14 crc kubenswrapper[4953]: I0223 00:13:14.699916 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:13:14 crc kubenswrapper[4953]: I0223 00:13:14.700707 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:13:23 crc kubenswrapper[4953]: I0223 00:13:23.856632 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" 
podUID="0f0a47df-1b8c-4e49-bbd7-1c55b257f918" containerName="registry" containerID="cri-o://1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9" gracePeriod=30 Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.274021 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.373122 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-bound-sa-token\") pod \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.373251 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-trusted-ca\") pod \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.373607 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.373707 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-tls\") pod \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.373738 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-certificates\") pod \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.373762 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5w42\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-kube-api-access-d5w42\") pod \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.373922 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-installation-pull-secrets\") pod \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.373989 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-ca-trust-extracted\") pod \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\" (UID: \"0f0a47df-1b8c-4e49-bbd7-1c55b257f918\") " Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.374770 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0f0a47df-1b8c-4e49-bbd7-1c55b257f918" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.374803 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0f0a47df-1b8c-4e49-bbd7-1c55b257f918" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.379562 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0f0a47df-1b8c-4e49-bbd7-1c55b257f918" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.380877 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-kube-api-access-d5w42" (OuterVolumeSpecName: "kube-api-access-d5w42") pod "0f0a47df-1b8c-4e49-bbd7-1c55b257f918" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918"). InnerVolumeSpecName "kube-api-access-d5w42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.385512 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0f0a47df-1b8c-4e49-bbd7-1c55b257f918" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.388614 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0f0a47df-1b8c-4e49-bbd7-1c55b257f918" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.392089 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0f0a47df-1b8c-4e49-bbd7-1c55b257f918" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.392322 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0f0a47df-1b8c-4e49-bbd7-1c55b257f918" (UID: "0f0a47df-1b8c-4e49-bbd7-1c55b257f918"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.475594 4953 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.475633 4953 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.475648 4953 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.475661 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5w42\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-kube-api-access-d5w42\") on node \"crc\" DevicePath \"\"" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.475673 4953 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.475684 4953 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.475696 4953 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f0a47df-1b8c-4e49-bbd7-1c55b257f918-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 00:13:24 crc 
kubenswrapper[4953]: I0223 00:13:24.524156 4953 generic.go:334] "Generic (PLEG): container finished" podID="0f0a47df-1b8c-4e49-bbd7-1c55b257f918" containerID="1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9" exitCode=0 Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.524195 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" event={"ID":"0f0a47df-1b8c-4e49-bbd7-1c55b257f918","Type":"ContainerDied","Data":"1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9"} Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.524214 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.524231 4953 scope.go:117] "RemoveContainer" containerID="1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.524220 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vwcpz" event={"ID":"0f0a47df-1b8c-4e49-bbd7-1c55b257f918","Type":"ContainerDied","Data":"6abf5b9ad444d5e8cffddc69867891c3785d1daa7f6e6ed6b2692e7956da574a"} Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.539364 4953 scope.go:117] "RemoveContainer" containerID="1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9" Feb 23 00:13:24 crc kubenswrapper[4953]: E0223 00:13:24.539773 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9\": container with ID starting with 1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9 not found: ID does not exist" containerID="1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.539867 4953 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9"} err="failed to get container status \"1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9\": rpc error: code = NotFound desc = could not find container \"1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9\": container with ID starting with 1f8f2479e20789250533effb5c56af5671befcee720324d26f5454dcde5235a9 not found: ID does not exist" Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.555928 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwcpz"] Feb 23 00:13:24 crc kubenswrapper[4953]: I0223 00:13:24.559180 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwcpz"] Feb 23 00:13:25 crc kubenswrapper[4953]: I0223 00:13:25.332722 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f0a47df-1b8c-4e49-bbd7-1c55b257f918" path="/var/lib/kubelet/pods/0f0a47df-1b8c-4e49-bbd7-1c55b257f918/volumes" Feb 23 00:13:44 crc kubenswrapper[4953]: I0223 00:13:44.700721 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:13:44 crc kubenswrapper[4953]: I0223 00:13:44.703226 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:13:44 crc kubenswrapper[4953]: I0223 00:13:44.703542 4953 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:13:44 crc kubenswrapper[4953]: I0223 00:13:44.704775 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e85dfaed3628c17b672280ee0d620d6df9b175b1e8985b9cad3e96240e250b5d"} pod="openshift-machine-config-operator/machine-config-daemon-gpl86" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 00:13:44 crc kubenswrapper[4953]: I0223 00:13:44.705107 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" containerID="cri-o://e85dfaed3628c17b672280ee0d620d6df9b175b1e8985b9cad3e96240e250b5d" gracePeriod=600 Feb 23 00:13:45 crc kubenswrapper[4953]: I0223 00:13:45.672228 4953 generic.go:334] "Generic (PLEG): container finished" podID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerID="e85dfaed3628c17b672280ee0d620d6df9b175b1e8985b9cad3e96240e250b5d" exitCode=0 Feb 23 00:13:45 crc kubenswrapper[4953]: I0223 00:13:45.672306 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerDied","Data":"e85dfaed3628c17b672280ee0d620d6df9b175b1e8985b9cad3e96240e250b5d"} Feb 23 00:13:45 crc kubenswrapper[4953]: I0223 00:13:45.673087 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"6ebcf43b0252f85e33937cd0598c95271871ca36e48dd4c099739b60157b6192"} Feb 23 00:13:45 crc kubenswrapper[4953]: I0223 00:13:45.673162 4953 scope.go:117] "RemoveContainer" 
containerID="78b6bdbed5bdeae065489da740e2d9a85c7cefd441694cfd8532ac7292251c08" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.169191 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc"] Feb 23 00:15:00 crc kubenswrapper[4953]: E0223 00:15:00.170005 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0a47df-1b8c-4e49-bbd7-1c55b257f918" containerName="registry" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.170022 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0a47df-1b8c-4e49-bbd7-1c55b257f918" containerName="registry" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.170125 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f0a47df-1b8c-4e49-bbd7-1c55b257f918" containerName="registry" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.170478 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.173600 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.173906 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.178728 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc"] Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.355152 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/312f8797-214b-4755-ab63-178d541b0bb1-secret-volume\") pod \"collect-profiles-29530095-pmpmc\" (UID: 
\"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.355233 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2d29\" (UniqueName: \"kubernetes.io/projected/312f8797-214b-4755-ab63-178d541b0bb1-kube-api-access-c2d29\") pod \"collect-profiles-29530095-pmpmc\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.355268 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/312f8797-214b-4755-ab63-178d541b0bb1-config-volume\") pod \"collect-profiles-29530095-pmpmc\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.456560 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2d29\" (UniqueName: \"kubernetes.io/projected/312f8797-214b-4755-ab63-178d541b0bb1-kube-api-access-c2d29\") pod \"collect-profiles-29530095-pmpmc\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.456638 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/312f8797-214b-4755-ab63-178d541b0bb1-config-volume\") pod \"collect-profiles-29530095-pmpmc\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.456697 4953 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/312f8797-214b-4755-ab63-178d541b0bb1-secret-volume\") pod \"collect-profiles-29530095-pmpmc\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.458510 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/312f8797-214b-4755-ab63-178d541b0bb1-config-volume\") pod \"collect-profiles-29530095-pmpmc\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.466182 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/312f8797-214b-4755-ab63-178d541b0bb1-secret-volume\") pod \"collect-profiles-29530095-pmpmc\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.482964 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2d29\" (UniqueName: \"kubernetes.io/projected/312f8797-214b-4755-ab63-178d541b0bb1-kube-api-access-c2d29\") pod \"collect-profiles-29530095-pmpmc\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.490590 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:00 crc kubenswrapper[4953]: I0223 00:15:00.915925 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc"] Feb 23 00:15:01 crc kubenswrapper[4953]: I0223 00:15:01.177059 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" event={"ID":"312f8797-214b-4755-ab63-178d541b0bb1","Type":"ContainerStarted","Data":"8e88e3b03c75579e0006a23881c6a0ba7038765ce05abba399d7d9840fb62477"} Feb 23 00:15:01 crc kubenswrapper[4953]: I0223 00:15:01.177414 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" event={"ID":"312f8797-214b-4755-ab63-178d541b0bb1","Type":"ContainerStarted","Data":"8647b308a2eba01cd5831b1668f5bb9c5ff47fe85cbb321dd27ecc668ec4e892"} Feb 23 00:15:01 crc kubenswrapper[4953]: I0223 00:15:01.198265 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" podStartSLOduration=1.198243762 podStartE2EDuration="1.198243762s" podCreationTimestamp="2026-02-23 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:15:01.19707266 +0000 UTC m=+499.130914516" watchObservedRunningTime="2026-02-23 00:15:01.198243762 +0000 UTC m=+499.132085628" Feb 23 00:15:02 crc kubenswrapper[4953]: I0223 00:15:02.188462 4953 generic.go:334] "Generic (PLEG): container finished" podID="312f8797-214b-4755-ab63-178d541b0bb1" containerID="8e88e3b03c75579e0006a23881c6a0ba7038765ce05abba399d7d9840fb62477" exitCode=0 Feb 23 00:15:02 crc kubenswrapper[4953]: I0223 00:15:02.188525 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" event={"ID":"312f8797-214b-4755-ab63-178d541b0bb1","Type":"ContainerDied","Data":"8e88e3b03c75579e0006a23881c6a0ba7038765ce05abba399d7d9840fb62477"} Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.378438 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.494863 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/312f8797-214b-4755-ab63-178d541b0bb1-secret-volume\") pod \"312f8797-214b-4755-ab63-178d541b0bb1\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.494922 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/312f8797-214b-4755-ab63-178d541b0bb1-config-volume\") pod \"312f8797-214b-4755-ab63-178d541b0bb1\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.494983 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2d29\" (UniqueName: \"kubernetes.io/projected/312f8797-214b-4755-ab63-178d541b0bb1-kube-api-access-c2d29\") pod \"312f8797-214b-4755-ab63-178d541b0bb1\" (UID: \"312f8797-214b-4755-ab63-178d541b0bb1\") " Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.495869 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312f8797-214b-4755-ab63-178d541b0bb1-config-volume" (OuterVolumeSpecName: "config-volume") pod "312f8797-214b-4755-ab63-178d541b0bb1" (UID: "312f8797-214b-4755-ab63-178d541b0bb1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.500954 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312f8797-214b-4755-ab63-178d541b0bb1-kube-api-access-c2d29" (OuterVolumeSpecName: "kube-api-access-c2d29") pod "312f8797-214b-4755-ab63-178d541b0bb1" (UID: "312f8797-214b-4755-ab63-178d541b0bb1"). InnerVolumeSpecName "kube-api-access-c2d29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.501447 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312f8797-214b-4755-ab63-178d541b0bb1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "312f8797-214b-4755-ab63-178d541b0bb1" (UID: "312f8797-214b-4755-ab63-178d541b0bb1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.595796 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2d29\" (UniqueName: \"kubernetes.io/projected/312f8797-214b-4755-ab63-178d541b0bb1-kube-api-access-c2d29\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.595826 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/312f8797-214b-4755-ab63-178d541b0bb1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:03 crc kubenswrapper[4953]: I0223 00:15:03.595836 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/312f8797-214b-4755-ab63-178d541b0bb1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:04 crc kubenswrapper[4953]: I0223 00:15:04.206338 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" 
event={"ID":"312f8797-214b-4755-ab63-178d541b0bb1","Type":"ContainerDied","Data":"8647b308a2eba01cd5831b1668f5bb9c5ff47fe85cbb321dd27ecc668ec4e892"} Feb 23 00:15:04 crc kubenswrapper[4953]: I0223 00:15:04.206396 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8647b308a2eba01cd5831b1668f5bb9c5ff47fe85cbb321dd27ecc668ec4e892" Feb 23 00:15:04 crc kubenswrapper[4953]: I0223 00:15:04.206408 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530095-pmpmc" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.625244 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69mr8"] Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.626067 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovn-controller" containerID="cri-o://030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a" gracePeriod=30 Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.626103 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="nbdb" containerID="cri-o://484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431" gracePeriod=30 Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.626158 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovn-acl-logging" containerID="cri-o://c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2" gracePeriod=30 Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.626181 4953 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a" gracePeriod=30 Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.626304 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="sbdb" containerID="cri-o://f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0" gracePeriod=30 Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.626206 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="northd" containerID="cri-o://e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0" gracePeriod=30 Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.626146 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kube-rbac-proxy-node" containerID="cri-o://eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d" gracePeriod=30 Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.656637 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" containerID="cri-o://2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff" gracePeriod=30 Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.917543 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/3.log" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.921069 
4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovn-acl-logging/0.log" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.921607 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovn-controller/0.log" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.922165 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.965765 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7l8q"] Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.965941 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.965952 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.965961 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312f8797-214b-4755-ab63-178d541b0bb1" containerName="collect-profiles" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.965966 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="312f8797-214b-4755-ab63-178d541b0bb1" containerName="collect-profiles" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.965975 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.965981 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc 
kubenswrapper[4953]: E0223 00:15:13.965989 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="sbdb" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.965994 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="sbdb" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966002 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovn-acl-logging" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966009 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovn-acl-logging" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966016 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kubecfg-setup" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966022 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kubecfg-setup" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966029 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kube-rbac-proxy-node" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966036 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kube-rbac-proxy-node" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966045 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="nbdb" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966051 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="nbdb" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966060 4953 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966089 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966097 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="northd" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966102 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="northd" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966111 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966116 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966127 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovn-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966133 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovn-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966209 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966218 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovn-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 
00:15:13.966228 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovn-acl-logging" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966234 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966241 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966248 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="kube-rbac-proxy-node" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966254 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966261 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="312f8797-214b-4755-ab63-178d541b0bb1" containerName="collect-profiles" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966267 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="nbdb" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966274 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="northd" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966279 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="sbdb" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966303 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc 
kubenswrapper[4953]: E0223 00:15:13.966382 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966388 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: E0223 00:15:13.966397 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966403 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.966485 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" containerName="ovnkube-controller" Feb 23 00:15:13 crc kubenswrapper[4953]: I0223 00:15:13.967920 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120559 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-script-lib\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120612 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-node-log\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120638 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-ovn-kubernetes\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120671 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-log-socket\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120711 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120738 4953 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-env-overrides\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120764 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-openvswitch\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120783 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-netns\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120801 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-systemd-units\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120825 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-bin\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120861 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-systemd\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: 
\"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120888 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x88gj\" (UniqueName: \"kubernetes.io/projected/5937f1d2-1966-4337-b099-ad0af539fe11-kube-api-access-x88gj\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120809 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-log-socket" (OuterVolumeSpecName: "log-socket") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120902 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120893 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120914 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-config\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120873 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120898 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120860 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.120984 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-var-lib-openvswitch\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121031 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-slash\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121106 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-ovn\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121136 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121156 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-etc-openvswitch\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121189 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-slash" (OuterVolumeSpecName: "host-slash") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121210 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121242 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5937f1d2-1966-4337-b099-ad0af539fe11-ovn-node-metrics-cert\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121331 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-netd\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121331 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121380 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-kubelet\") pod \"5937f1d2-1966-4337-b099-ad0af539fe11\" (UID: \"5937f1d2-1966-4337-b099-ad0af539fe11\") " Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121389 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121481 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121442 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121556 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121585 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovn-node-metrics-cert\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121602 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121644 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-run-netns\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121706 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-ovn\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121698 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-env-overrides" (OuterVolumeSpecName: "env-overrides") pod 
"5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121753 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-systemd-units\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121785 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-node-log" (OuterVolumeSpecName: "node-log") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121825 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-node-log\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121876 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovnkube-script-lib\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121923 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.121977 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-cni-netd\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122032 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovnkube-config\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122053 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122099 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-slash\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122115 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-env-overrides\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122207 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-systemd\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122258 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-var-lib-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122374 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-etc-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122441 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-log-socket\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122502 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krvb2\" (UniqueName: \"kubernetes.io/projected/362f3ebe-2986-4cfc-9c04-fe810d56d078-kube-api-access-krvb2\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122551 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122603 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-cni-bin\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122669 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-kubelet\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122755 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122782 4953 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122808 4953 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-slash\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122833 4953 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122857 4953 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122883 4953 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122908 4953 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122934 4953 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.122957 4953 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-node-log\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc 
kubenswrapper[4953]: I0223 00:15:14.122982 4953 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.123008 4953 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-log-socket\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.123033 4953 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.123059 4953 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5937f1d2-1966-4337-b099-ad0af539fe11-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.123083 4953 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.123106 4953 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.123129 4953 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.123151 4953 
reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.126751 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5937f1d2-1966-4337-b099-ad0af539fe11-kube-api-access-x88gj" (OuterVolumeSpecName: "kube-api-access-x88gj") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "kube-api-access-x88gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.131582 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5937f1d2-1966-4337-b099-ad0af539fe11-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.134373 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5937f1d2-1966-4337-b099-ad0af539fe11" (UID: "5937f1d2-1966-4337-b099-ad0af539fe11"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224512 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-cni-netd\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224571 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224603 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovnkube-config\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224641 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-slash\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224649 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-cni-netd\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: 
I0223 00:15:14.224665 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224672 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-env-overrides\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224723 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-slash\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224733 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-var-lib-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224757 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-systemd\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224784 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-systemd\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224787 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-etc-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224812 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-etc-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224758 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-var-lib-openvswitch\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224829 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-log-socket\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224854 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-log-socket\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224862 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krvb2\" (UniqueName: \"kubernetes.io/projected/362f3ebe-2986-4cfc-9c04-fe810d56d078-kube-api-access-krvb2\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224888 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224910 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-cni-bin\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224942 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-kubelet\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224961 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-run-netns\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.224979 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovn-node-metrics-cert\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225003 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-ovn\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225023 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-systemd-units\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225052 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-node-log\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225073 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225094 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovnkube-script-lib\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225136 4953 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5937f1d2-1966-4337-b099-ad0af539fe11-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225148 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x88gj\" (UniqueName: \"kubernetes.io/projected/5937f1d2-1966-4337-b099-ad0af539fe11-kube-api-access-x88gj\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225160 4953 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5937f1d2-1966-4337-b099-ad0af539fe11-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225465 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-systemd-units\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225515 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-run-ovn\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225546 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225586 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-node-log\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225464 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-kubelet\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225603 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-run-netns\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225658 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225779 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-env-overrides\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225867 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/362f3ebe-2986-4cfc-9c04-fe810d56d078-host-cni-bin\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.225790 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovnkube-script-lib\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.226266 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovnkube-config\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.228107 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/362f3ebe-2986-4cfc-9c04-fe810d56d078-ovn-node-metrics-cert\") 
pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.243377 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krvb2\" (UniqueName: \"kubernetes.io/projected/362f3ebe-2986-4cfc-9c04-fe810d56d078-kube-api-access-krvb2\") pod \"ovnkube-node-b7l8q\" (UID: \"362f3ebe-2986-4cfc-9c04-fe810d56d078\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.280689 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.563777 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/2.log" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.564268 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/1.log" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.564406 4953 generic.go:334] "Generic (PLEG): container finished" podID="c6ae22b1-a5f9-483a-be3d-32cfb7d516d5" containerID="d3656a4f92aeabf073796fdda06705189896687219459312443c8d2846f004d0" exitCode=2 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.564505 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxzfb" event={"ID":"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5","Type":"ContainerDied","Data":"d3656a4f92aeabf073796fdda06705189896687219459312443c8d2846f004d0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.564585 4953 scope.go:117] "RemoveContainer" containerID="47d5a3bedeb2d2baf8bec6932b5cad209fb75c63bbca5f89ad0fba12c6c921ce" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.565726 4953 scope.go:117] "RemoveContainer" 
containerID="d3656a4f92aeabf073796fdda06705189896687219459312443c8d2846f004d0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.566272 4953 generic.go:334] "Generic (PLEG): container finished" podID="362f3ebe-2986-4cfc-9c04-fe810d56d078" containerID="72d52a7459427e20b5a6988d8ba51d553064497f79259e4702abebff726d9dd6" exitCode=0 Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.566374 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-pxzfb_openshift-multus(c6ae22b1-a5f9-483a-be3d-32cfb7d516d5)\"" pod="openshift-multus/multus-pxzfb" podUID="c6ae22b1-a5f9-483a-be3d-32cfb7d516d5" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.566327 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerDied","Data":"72d52a7459427e20b5a6988d8ba51d553064497f79259e4702abebff726d9dd6"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.566647 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"a5d8fd58484677f1eeef8e4c1f35c1b172414d2953e9e2b5c6f182e6f76754fd"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.569528 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovnkube-controller/3.log" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.575156 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovn-acl-logging/0.log" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.575986 4953 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69mr8_5937f1d2-1966-4337-b099-ad0af539fe11/ovn-controller/0.log" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576396 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff" exitCode=0 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576424 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0" exitCode=0 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576436 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431" exitCode=0 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576446 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0" exitCode=0 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576456 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a" exitCode=0 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576465 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d" exitCode=0 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576476 4953 generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2" exitCode=143 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576485 4953 
generic.go:334] "Generic (PLEG): container finished" podID="5937f1d2-1966-4337-b099-ad0af539fe11" containerID="030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a" exitCode=143 Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576509 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576542 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576562 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576578 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576591 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576608 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" 
event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576621 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576635 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576642 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576650 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576658 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576666 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576673 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576680 4953 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576687 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576693 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576704 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576716 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576726 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576734 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576645 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576741 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576863 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576877 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576885 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576896 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576902 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576909 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576921 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" 
event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576936 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576946 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576952 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576958 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576964 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576970 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576976 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576982 4953 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576989 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.576996 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577004 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69mr8" event={"ID":"5937f1d2-1966-4337-b099-ad0af539fe11","Type":"ContainerDied","Data":"e84414805283e28d3634d2810893b8d68514dcdab14288a9712f3c6c8ea6ed54"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577015 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577023 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577029 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577035 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} Feb 23 00:15:14 crc kubenswrapper[4953]: 
I0223 00:15:14.577043 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577050 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577056 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577063 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577069 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.577081 4953 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.602408 4953 scope.go:117] "RemoveContainer" containerID="2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.639546 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.656653 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69mr8"] Feb 23 00:15:14 crc 
kubenswrapper[4953]: I0223 00:15:14.657522 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69mr8"] Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.667850 4953 scope.go:117] "RemoveContainer" containerID="f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.678994 4953 scope.go:117] "RemoveContainer" containerID="484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.689978 4953 scope.go:117] "RemoveContainer" containerID="e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.703191 4953 scope.go:117] "RemoveContainer" containerID="f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.715349 4953 scope.go:117] "RemoveContainer" containerID="eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.769034 4953 scope.go:117] "RemoveContainer" containerID="c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.792575 4953 scope.go:117] "RemoveContainer" containerID="030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.810614 4953 scope.go:117] "RemoveContainer" containerID="41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.846510 4953 scope.go:117] "RemoveContainer" containerID="2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.846859 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": container with ID starting with 2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff not found: ID does not exist" containerID="2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.846884 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} err="failed to get container status \"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": rpc error: code = NotFound desc = could not find container \"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": container with ID starting with 2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.846904 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.847110 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": container with ID starting with 1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284 not found: ID does not exist" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.847129 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} err="failed to get container status \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": rpc error: code = NotFound desc = could not find container \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": container with ID 
starting with 1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.847143 4953 scope.go:117] "RemoveContainer" containerID="f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.847403 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": container with ID starting with f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0 not found: ID does not exist" containerID="f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.847421 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} err="failed to get container status \"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": rpc error: code = NotFound desc = could not find container \"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": container with ID starting with f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.847433 4953 scope.go:117] "RemoveContainer" containerID="484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.847636 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": container with ID starting with 484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431 not found: ID does not exist" containerID="484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431" Feb 23 
00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.847661 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} err="failed to get container status \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": rpc error: code = NotFound desc = could not find container \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": container with ID starting with 484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.847676 4953 scope.go:117] "RemoveContainer" containerID="e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.847885 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": container with ID starting with e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0 not found: ID does not exist" containerID="e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.847939 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} err="failed to get container status \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": rpc error: code = NotFound desc = could not find container \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": container with ID starting with e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.847959 4953 scope.go:117] "RemoveContainer" 
containerID="f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.848173 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": container with ID starting with f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a not found: ID does not exist" containerID="f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.848188 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} err="failed to get container status \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": rpc error: code = NotFound desc = could not find container \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": container with ID starting with f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.848200 4953 scope.go:117] "RemoveContainer" containerID="eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.848404 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": container with ID starting with eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d not found: ID does not exist" containerID="eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.848424 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} err="failed to get container status \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": rpc error: code = NotFound desc = could not find container \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": container with ID starting with eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.848436 4953 scope.go:117] "RemoveContainer" containerID="c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.848749 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": container with ID starting with c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2 not found: ID does not exist" containerID="c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.848771 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} err="failed to get container status \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": rpc error: code = NotFound desc = could not find container \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": container with ID starting with c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.848786 4953 scope.go:117] "RemoveContainer" containerID="030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.849007 4953 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": container with ID starting with 030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a not found: ID does not exist" containerID="030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.849022 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} err="failed to get container status \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": rpc error: code = NotFound desc = could not find container \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": container with ID starting with 030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.849034 4953 scope.go:117] "RemoveContainer" containerID="41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe" Feb 23 00:15:14 crc kubenswrapper[4953]: E0223 00:15:14.849234 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": container with ID starting with 41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe not found: ID does not exist" containerID="41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.849259 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} err="failed to get container status \"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": rpc error: code = NotFound desc = could not find container 
\"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": container with ID starting with 41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.849273 4953 scope.go:117] "RemoveContainer" containerID="2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.849504 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} err="failed to get container status \"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": rpc error: code = NotFound desc = could not find container \"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": container with ID starting with 2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.849526 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.849774 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} err="failed to get container status \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": rpc error: code = NotFound desc = could not find container \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": container with ID starting with 1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.849789 4953 scope.go:117] "RemoveContainer" containerID="f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850007 4953 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} err="failed to get container status \"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": rpc error: code = NotFound desc = could not find container \"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": container with ID starting with f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850024 4953 scope.go:117] "RemoveContainer" containerID="484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850250 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} err="failed to get container status \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": rpc error: code = NotFound desc = could not find container \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": container with ID starting with 484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850263 4953 scope.go:117] "RemoveContainer" containerID="e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850518 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} err="failed to get container status \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": rpc error: code = NotFound desc = could not find container \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": container with ID starting with 
e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850551 4953 scope.go:117] "RemoveContainer" containerID="f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850766 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} err="failed to get container status \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": rpc error: code = NotFound desc = could not find container \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": container with ID starting with f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850790 4953 scope.go:117] "RemoveContainer" containerID="eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.850993 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} err="failed to get container status \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": rpc error: code = NotFound desc = could not find container \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": container with ID starting with eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.851007 4953 scope.go:117] "RemoveContainer" containerID="c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.851210 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} err="failed to get container status \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": rpc error: code = NotFound desc = could not find container \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": container with ID starting with c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.851227 4953 scope.go:117] "RemoveContainer" containerID="030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.851471 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} err="failed to get container status \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": rpc error: code = NotFound desc = could not find container \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": container with ID starting with 030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.851485 4953 scope.go:117] "RemoveContainer" containerID="41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.851672 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} err="failed to get container status \"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": rpc error: code = NotFound desc = could not find container \"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": container with ID starting with 41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe not found: ID does not 
exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.851684 4953 scope.go:117] "RemoveContainer" containerID="2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.852180 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} err="failed to get container status \"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": rpc error: code = NotFound desc = could not find container \"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": container with ID starting with 2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.852195 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.852437 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} err="failed to get container status \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": rpc error: code = NotFound desc = could not find container \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": container with ID starting with 1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.852451 4953 scope.go:117] "RemoveContainer" containerID="f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.852649 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} err="failed to get container status 
\"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": rpc error: code = NotFound desc = could not find container \"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": container with ID starting with f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.852663 4953 scope.go:117] "RemoveContainer" containerID="484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.852851 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} err="failed to get container status \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": rpc error: code = NotFound desc = could not find container \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": container with ID starting with 484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.852864 4953 scope.go:117] "RemoveContainer" containerID="e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.853041 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} err="failed to get container status \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": rpc error: code = NotFound desc = could not find container \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": container with ID starting with e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.853054 4953 scope.go:117] "RemoveContainer" 
containerID="f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.853263 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} err="failed to get container status \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": rpc error: code = NotFound desc = could not find container \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": container with ID starting with f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.853278 4953 scope.go:117] "RemoveContainer" containerID="eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.853471 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} err="failed to get container status \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": rpc error: code = NotFound desc = could not find container \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": container with ID starting with eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.853484 4953 scope.go:117] "RemoveContainer" containerID="c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.853838 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} err="failed to get container status \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": rpc error: code = NotFound desc = could 
not find container \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": container with ID starting with c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.853859 4953 scope.go:117] "RemoveContainer" containerID="030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.854067 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} err="failed to get container status \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": rpc error: code = NotFound desc = could not find container \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": container with ID starting with 030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.854087 4953 scope.go:117] "RemoveContainer" containerID="41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.854281 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} err="failed to get container status \"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": rpc error: code = NotFound desc = could not find container \"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": container with ID starting with 41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.854311 4953 scope.go:117] "RemoveContainer" containerID="2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 
00:15:14.854540 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff"} err="failed to get container status \"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": rpc error: code = NotFound desc = could not find container \"2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff\": container with ID starting with 2a1fa264e9a2731b1f64c214354bbeca42c2d8b596b5407e1e1e3a61284635ff not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.854559 4953 scope.go:117] "RemoveContainer" containerID="1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.854817 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284"} err="failed to get container status \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": rpc error: code = NotFound desc = could not find container \"1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284\": container with ID starting with 1eff85677d9dc66e0b9d01f22cebfd928638d41683d32b71d9954861ea8f0284 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.854835 4953 scope.go:117] "RemoveContainer" containerID="f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855048 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0"} err="failed to get container status \"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": rpc error: code = NotFound desc = could not find container \"f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0\": container with ID starting with 
f772a65b4b640ef4ce348d71c16ba52eba63bfd6b63f9445a401d823a9fd57a0 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855065 4953 scope.go:117] "RemoveContainer" containerID="484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855281 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431"} err="failed to get container status \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": rpc error: code = NotFound desc = could not find container \"484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431\": container with ID starting with 484d8d1501bfdd234465b4d5b30707351f2d1ff78db85e20b613d0770ebf2431 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855325 4953 scope.go:117] "RemoveContainer" containerID="e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855540 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0"} err="failed to get container status \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": rpc error: code = NotFound desc = could not find container \"e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0\": container with ID starting with e17e93388c49fc9dffb99c130de522a9b1a4b5dbb03056d4a841642478edc5a0 not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855558 4953 scope.go:117] "RemoveContainer" containerID="f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855748 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a"} err="failed to get container status \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": rpc error: code = NotFound desc = could not find container \"f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a\": container with ID starting with f55f0b2c6937096bec23fc1b4311cdcaecb0993065ce09b5fcc2a584ca1eb54a not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855772 4953 scope.go:117] "RemoveContainer" containerID="eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.855982 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d"} err="failed to get container status \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": rpc error: code = NotFound desc = could not find container \"eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d\": container with ID starting with eb5a230a8bc21523911ca79fe4d27afb5dfda193d9defdccb6452559e697d71d not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.856001 4953 scope.go:117] "RemoveContainer" containerID="c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.856212 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2"} err="failed to get container status \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": rpc error: code = NotFound desc = could not find container \"c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2\": container with ID starting with c5fe86c86ee6bb8dc478179529d2bbb18029caa73d968de59074b8e0569bb7f2 not found: ID does not 
exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.856232 4953 scope.go:117] "RemoveContainer" containerID="030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.856428 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a"} err="failed to get container status \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": rpc error: code = NotFound desc = could not find container \"030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a\": container with ID starting with 030b0426f985060d7f244e1f795079f4dc7db9c1f558946e62afe1a1b0bafd3a not found: ID does not exist" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.856445 4953 scope.go:117] "RemoveContainer" containerID="41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe" Feb 23 00:15:14 crc kubenswrapper[4953]: I0223 00:15:14.856742 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe"} err="failed to get container status \"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": rpc error: code = NotFound desc = could not find container \"41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe\": container with ID starting with 41658411cd1d832be04d6109b2be52e5dc737efda43c8932252aff96bd1427fe not found: ID does not exist" Feb 23 00:15:15 crc kubenswrapper[4953]: I0223 00:15:15.336273 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5937f1d2-1966-4337-b099-ad0af539fe11" path="/var/lib/kubelet/pods/5937f1d2-1966-4337-b099-ad0af539fe11/volumes" Feb 23 00:15:15 crc kubenswrapper[4953]: I0223 00:15:15.583231 4953 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/2.log" Feb 23 00:15:15 crc kubenswrapper[4953]: I0223 00:15:15.587316 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"5d1c38c4760c1a63d698742dace7d33da80592fcc07a3532ebefc543eb6db625"} Feb 23 00:15:15 crc kubenswrapper[4953]: I0223 00:15:15.587349 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"bfa58de2772ec1248d0d0c8613be7eb605b76dfb5d996afb0557f62f2cb3cad4"} Feb 23 00:15:15 crc kubenswrapper[4953]: I0223 00:15:15.587364 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"0e256ef02512d3b46de505cfc8948ac1112af6e7b08705c454d0f76271848e4d"} Feb 23 00:15:15 crc kubenswrapper[4953]: I0223 00:15:15.587376 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"c04bacc6e41b21cecf63d8d1400e2a9a82b3b3d398e3cf2e1d0af2a500088c28"} Feb 23 00:15:15 crc kubenswrapper[4953]: I0223 00:15:15.587388 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"6ae659cd98c15323cfdcd51d86e43b2c7a75592eb032bce01489ecd73dbd35cf"} Feb 23 00:15:15 crc kubenswrapper[4953]: I0223 00:15:15.587397 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" 
event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"5d09fe918508d38ac88fe2a1ffc8943a54c9e9e3efac3ca13d433192ca4b2ff7"} Feb 23 00:15:18 crc kubenswrapper[4953]: I0223 00:15:18.613276 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"4d207be801293b1ba73529e7798c3cb6488f26edac93954f9ce7d229cbfa1daa"} Feb 23 00:15:20 crc kubenswrapper[4953]: I0223 00:15:20.625845 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" event={"ID":"362f3ebe-2986-4cfc-9c04-fe810d56d078","Type":"ContainerStarted","Data":"77d92523b5d21249e6bad789f3f41072a16eff325240e49260b2a7e3c81b065a"} Feb 23 00:15:20 crc kubenswrapper[4953]: I0223 00:15:20.626272 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:20 crc kubenswrapper[4953]: I0223 00:15:20.626306 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:20 crc kubenswrapper[4953]: I0223 00:15:20.655729 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:20 crc kubenswrapper[4953]: I0223 00:15:20.685029 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" podStartSLOduration=7.685002441 podStartE2EDuration="7.685002441s" podCreationTimestamp="2026-02-23 00:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:15:20.655336113 +0000 UTC m=+518.589177959" watchObservedRunningTime="2026-02-23 00:15:20.685002441 +0000 UTC m=+518.618844277" Feb 23 00:15:21 crc kubenswrapper[4953]: I0223 00:15:21.631408 4953 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:21 crc kubenswrapper[4953]: I0223 00:15:21.660497 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:30 crc kubenswrapper[4953]: I0223 00:15:30.325994 4953 scope.go:117] "RemoveContainer" containerID="d3656a4f92aeabf073796fdda06705189896687219459312443c8d2846f004d0" Feb 23 00:15:30 crc kubenswrapper[4953]: E0223 00:15:30.326808 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-pxzfb_openshift-multus(c6ae22b1-a5f9-483a-be3d-32cfb7d516d5)\"" pod="openshift-multus/multus-pxzfb" podUID="c6ae22b1-a5f9-483a-be3d-32cfb7d516d5" Feb 23 00:15:44 crc kubenswrapper[4953]: I0223 00:15:44.301010 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7l8q" Feb 23 00:15:44 crc kubenswrapper[4953]: I0223 00:15:44.326223 4953 scope.go:117] "RemoveContainer" containerID="d3656a4f92aeabf073796fdda06705189896687219459312443c8d2846f004d0" Feb 23 00:15:44 crc kubenswrapper[4953]: I0223 00:15:44.765364 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxzfb_c6ae22b1-a5f9-483a-be3d-32cfb7d516d5/kube-multus/2.log" Feb 23 00:15:44 crc kubenswrapper[4953]: I0223 00:15:44.765423 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxzfb" event={"ID":"c6ae22b1-a5f9-483a-be3d-32cfb7d516d5","Type":"ContainerStarted","Data":"47a46c281a292b4b3ccc034dc55cc3cebb41dde67e070fe6285561a7366421c8"} Feb 23 00:16:10 crc kubenswrapper[4953]: I0223 00:16:10.885048 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmzrz"] Feb 23 00:16:10 crc kubenswrapper[4953]: I0223 
00:16:10.886039 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zmzrz" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerName="registry-server" containerID="cri-o://de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489" gracePeriod=30 Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.223900 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.363726 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-utilities\") pod \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.364065 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-catalog-content\") pod \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.364153 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9jqw\" (UniqueName: \"kubernetes.io/projected/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-kube-api-access-k9jqw\") pod \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\" (UID: \"3ff8b616-423d-4b7f-9fb8-2ac91fbef324\") " Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.364932 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-utilities" (OuterVolumeSpecName: "utilities") pod "3ff8b616-423d-4b7f-9fb8-2ac91fbef324" (UID: "3ff8b616-423d-4b7f-9fb8-2ac91fbef324"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.365260 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.369499 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-kube-api-access-k9jqw" (OuterVolumeSpecName: "kube-api-access-k9jqw") pod "3ff8b616-423d-4b7f-9fb8-2ac91fbef324" (UID: "3ff8b616-423d-4b7f-9fb8-2ac91fbef324"). InnerVolumeSpecName "kube-api-access-k9jqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.386065 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ff8b616-423d-4b7f-9fb8-2ac91fbef324" (UID: "3ff8b616-423d-4b7f-9fb8-2ac91fbef324"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.465784 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.465817 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9jqw\" (UniqueName: \"kubernetes.io/projected/3ff8b616-423d-4b7f-9fb8-2ac91fbef324-kube-api-access-k9jqw\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.933031 4953 generic.go:334] "Generic (PLEG): container finished" podID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerID="de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489" exitCode=0 Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.933097 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zmzrz" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.933089 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmzrz" event={"ID":"3ff8b616-423d-4b7f-9fb8-2ac91fbef324","Type":"ContainerDied","Data":"de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489"} Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.933156 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zmzrz" event={"ID":"3ff8b616-423d-4b7f-9fb8-2ac91fbef324","Type":"ContainerDied","Data":"4334880ccb543b3c5d4d44e45f457acd59d715cd244f01c7651b8068f79e8086"} Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.933176 4953 scope.go:117] "RemoveContainer" containerID="de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.950333 4953 scope.go:117] "RemoveContainer" 
containerID="799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.965275 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmzrz"] Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.968022 4953 scope.go:117] "RemoveContainer" containerID="2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.972505 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zmzrz"] Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.988457 4953 scope.go:117] "RemoveContainer" containerID="de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489" Feb 23 00:16:11 crc kubenswrapper[4953]: E0223 00:16:11.988935 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489\": container with ID starting with de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489 not found: ID does not exist" containerID="de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.988981 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489"} err="failed to get container status \"de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489\": rpc error: code = NotFound desc = could not find container \"de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489\": container with ID starting with de54988d5da9330b068b9f3b4da467e55278eadde549dd7c5b5c124b0f775489 not found: ID does not exist" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.989014 4953 scope.go:117] "RemoveContainer" 
containerID="799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5" Feb 23 00:16:11 crc kubenswrapper[4953]: E0223 00:16:11.989441 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5\": container with ID starting with 799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5 not found: ID does not exist" containerID="799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.989480 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5"} err="failed to get container status \"799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5\": rpc error: code = NotFound desc = could not find container \"799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5\": container with ID starting with 799eb1538a01d71130a16951970a5f9954941bf9a9edc1356d349ac896e0d2b5 not found: ID does not exist" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.989508 4953 scope.go:117] "RemoveContainer" containerID="2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054" Feb 23 00:16:11 crc kubenswrapper[4953]: E0223 00:16:11.989826 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054\": container with ID starting with 2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054 not found: ID does not exist" containerID="2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054" Feb 23 00:16:11 crc kubenswrapper[4953]: I0223 00:16:11.989878 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054"} err="failed to get container status \"2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054\": rpc error: code = NotFound desc = could not find container \"2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054\": container with ID starting with 2aa11f794940a50d7d938e869001f064f99fdc6753e52aa912a65a39316c0054 not found: ID does not exist" Feb 23 00:16:13 crc kubenswrapper[4953]: I0223 00:16:13.331978 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" path="/var/lib/kubelet/pods/3ff8b616-423d-4b7f-9fb8-2ac91fbef324/volumes" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.547775 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh"] Feb 23 00:16:14 crc kubenswrapper[4953]: E0223 00:16:14.548310 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerName="extract-content" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.548323 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerName="extract-content" Feb 23 00:16:14 crc kubenswrapper[4953]: E0223 00:16:14.548339 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerName="extract-utilities" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.548346 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerName="extract-utilities" Feb 23 00:16:14 crc kubenswrapper[4953]: E0223 00:16:14.548359 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerName="registry-server" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.548365 4953 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerName="registry-server" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.548451 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff8b616-423d-4b7f-9fb8-2ac91fbef324" containerName="registry-server" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.549175 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.551102 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.558195 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh"] Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.602995 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znczs\" (UniqueName: \"kubernetes.io/projected/64bb377e-103a-40e7-a37f-de41160c7a61-kube-api-access-znczs\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.603034 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.603073 4953 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.699984 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.700049 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.703877 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.704065 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znczs\" (UniqueName: \"kubernetes.io/projected/64bb377e-103a-40e7-a37f-de41160c7a61-kube-api-access-znczs\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") 
" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.704108 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.704351 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.704885 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.722821 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znczs\" (UniqueName: \"kubernetes.io/projected/64bb377e-103a-40e7-a37f-de41160c7a61-kube-api-access-znczs\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:14 crc kubenswrapper[4953]: I0223 00:16:14.865894 4953 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:15 crc kubenswrapper[4953]: I0223 00:16:15.069732 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh"] Feb 23 00:16:15 crc kubenswrapper[4953]: W0223 00:16:15.084648 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64bb377e_103a_40e7_a37f_de41160c7a61.slice/crio-53f782dc5c5533d4d6eec706a3b4caac191bf8aa38535ee9b197f456eb53373c WatchSource:0}: Error finding container 53f782dc5c5533d4d6eec706a3b4caac191bf8aa38535ee9b197f456eb53373c: Status 404 returned error can't find the container with id 53f782dc5c5533d4d6eec706a3b4caac191bf8aa38535ee9b197f456eb53373c Feb 23 00:16:15 crc kubenswrapper[4953]: I0223 00:16:15.955930 4953 generic.go:334] "Generic (PLEG): container finished" podID="64bb377e-103a-40e7-a37f-de41160c7a61" containerID="88149be714498fb2e674edd6f6cf93f53831f8757c2c430b3f8ee96da5fb9603" exitCode=0 Feb 23 00:16:15 crc kubenswrapper[4953]: I0223 00:16:15.955975 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" event={"ID":"64bb377e-103a-40e7-a37f-de41160c7a61","Type":"ContainerDied","Data":"88149be714498fb2e674edd6f6cf93f53831f8757c2c430b3f8ee96da5fb9603"} Feb 23 00:16:15 crc kubenswrapper[4953]: I0223 00:16:15.956002 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" event={"ID":"64bb377e-103a-40e7-a37f-de41160c7a61","Type":"ContainerStarted","Data":"53f782dc5c5533d4d6eec706a3b4caac191bf8aa38535ee9b197f456eb53373c"} Feb 23 00:16:15 crc kubenswrapper[4953]: I0223 00:16:15.958202 4953 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 23 00:16:17 crc kubenswrapper[4953]: I0223 00:16:17.966411 4953 generic.go:334] "Generic (PLEG): container finished" podID="64bb377e-103a-40e7-a37f-de41160c7a61" containerID="55d00adbfa7b676265804884d11a1614aee2583ea03572e7346a5c3aa86d55d7" exitCode=0 Feb 23 00:16:17 crc kubenswrapper[4953]: I0223 00:16:17.966466 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" event={"ID":"64bb377e-103a-40e7-a37f-de41160c7a61","Type":"ContainerDied","Data":"55d00adbfa7b676265804884d11a1614aee2583ea03572e7346a5c3aa86d55d7"} Feb 23 00:16:18 crc kubenswrapper[4953]: I0223 00:16:18.974172 4953 generic.go:334] "Generic (PLEG): container finished" podID="64bb377e-103a-40e7-a37f-de41160c7a61" containerID="09ad7aec954c9d8aa9662d50e14a3f170bb1c84a072a449e0a799b2b8eacfb78" exitCode=0 Feb 23 00:16:18 crc kubenswrapper[4953]: I0223 00:16:18.974215 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" event={"ID":"64bb377e-103a-40e7-a37f-de41160c7a61","Type":"ContainerDied","Data":"09ad7aec954c9d8aa9662d50e14a3f170bb1c84a072a449e0a799b2b8eacfb78"} Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.239238 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.371990 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znczs\" (UniqueName: \"kubernetes.io/projected/64bb377e-103a-40e7-a37f-de41160c7a61-kube-api-access-znczs\") pod \"64bb377e-103a-40e7-a37f-de41160c7a61\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.372068 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-bundle\") pod \"64bb377e-103a-40e7-a37f-de41160c7a61\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.372180 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-util\") pod \"64bb377e-103a-40e7-a37f-de41160c7a61\" (UID: \"64bb377e-103a-40e7-a37f-de41160c7a61\") " Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.374266 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-bundle" (OuterVolumeSpecName: "bundle") pod "64bb377e-103a-40e7-a37f-de41160c7a61" (UID: "64bb377e-103a-40e7-a37f-de41160c7a61"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.380510 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64bb377e-103a-40e7-a37f-de41160c7a61-kube-api-access-znczs" (OuterVolumeSpecName: "kube-api-access-znczs") pod "64bb377e-103a-40e7-a37f-de41160c7a61" (UID: "64bb377e-103a-40e7-a37f-de41160c7a61"). InnerVolumeSpecName "kube-api-access-znczs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.474443 4953 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.474488 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znczs\" (UniqueName: \"kubernetes.io/projected/64bb377e-103a-40e7-a37f-de41160c7a61-kube-api-access-znczs\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.628559 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-util" (OuterVolumeSpecName: "util") pod "64bb377e-103a-40e7-a37f-de41160c7a61" (UID: "64bb377e-103a-40e7-a37f-de41160c7a61"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.677348 4953 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64bb377e-103a-40e7-a37f-de41160c7a61-util\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.745589 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5"] Feb 23 00:16:20 crc kubenswrapper[4953]: E0223 00:16:20.745832 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bb377e-103a-40e7-a37f-de41160c7a61" containerName="pull" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.745846 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bb377e-103a-40e7-a37f-de41160c7a61" containerName="pull" Feb 23 00:16:20 crc kubenswrapper[4953]: E0223 00:16:20.745861 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="64bb377e-103a-40e7-a37f-de41160c7a61" containerName="extract" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.745870 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bb377e-103a-40e7-a37f-de41160c7a61" containerName="extract" Feb 23 00:16:20 crc kubenswrapper[4953]: E0223 00:16:20.745887 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bb377e-103a-40e7-a37f-de41160c7a61" containerName="util" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.745895 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bb377e-103a-40e7-a37f-de41160c7a61" containerName="util" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.746016 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="64bb377e-103a-40e7-a37f-de41160c7a61" containerName="extract" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.746924 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.760639 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5"] Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.882434 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.882635 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-util\") pod 
\"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.882681 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpkq\" (UniqueName: \"kubernetes.io/projected/61b8140f-0a3b-401c-ac40-92def4a3f617-kube-api-access-vzpkq\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.983906 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.984036 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.984089 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzpkq\" (UniqueName: \"kubernetes.io/projected/61b8140f-0a3b-401c-ac40-92def4a3f617-kube-api-access-vzpkq\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: 
\"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.985199 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.985321 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.992226 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" event={"ID":"64bb377e-103a-40e7-a37f-de41160c7a61","Type":"ContainerDied","Data":"53f782dc5c5533d4d6eec706a3b4caac191bf8aa38535ee9b197f456eb53373c"} Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.992277 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f782dc5c5533d4d6eec706a3b4caac191bf8aa38535ee9b197f456eb53373c" Feb 23 00:16:20 crc kubenswrapper[4953]: I0223 00:16:20.992314 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh" Feb 23 00:16:21 crc kubenswrapper[4953]: I0223 00:16:21.003264 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzpkq\" (UniqueName: \"kubernetes.io/projected/61b8140f-0a3b-401c-ac40-92def4a3f617-kube-api-access-vzpkq\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:21 crc kubenswrapper[4953]: I0223 00:16:21.069667 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:21 crc kubenswrapper[4953]: I0223 00:16:21.337000 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5"] Feb 23 00:16:21 crc kubenswrapper[4953]: W0223 00:16:21.346536 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61b8140f_0a3b_401c_ac40_92def4a3f617.slice/crio-c9266e61809e677f41b69b95a005becf51e7060f9c9a5e4b0d63513c48f8926f WatchSource:0}: Error finding container c9266e61809e677f41b69b95a005becf51e7060f9c9a5e4b0d63513c48f8926f: Status 404 returned error can't find the container with id c9266e61809e677f41b69b95a005becf51e7060f9c9a5e4b0d63513c48f8926f Feb 23 00:16:22 crc kubenswrapper[4953]: I0223 00:16:22.003156 4953 generic.go:334] "Generic (PLEG): container finished" podID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerID="86c5ba56a695ee0929ef465e233635d4037e1102320d14728a8f3f86248d693d" exitCode=0 Feb 23 00:16:22 crc kubenswrapper[4953]: I0223 00:16:22.003235 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" event={"ID":"61b8140f-0a3b-401c-ac40-92def4a3f617","Type":"ContainerDied","Data":"86c5ba56a695ee0929ef465e233635d4037e1102320d14728a8f3f86248d693d"} Feb 23 00:16:22 crc kubenswrapper[4953]: I0223 00:16:22.003327 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" event={"ID":"61b8140f-0a3b-401c-ac40-92def4a3f617","Type":"ContainerStarted","Data":"c9266e61809e677f41b69b95a005becf51e7060f9c9a5e4b0d63513c48f8926f"} Feb 23 00:16:24 crc kubenswrapper[4953]: I0223 00:16:24.016133 4953 generic.go:334] "Generic (PLEG): container finished" podID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerID="7652b8bea19840535d5e0e67eb44f51984c040683fea5a008b30d90c8f927429" exitCode=0 Feb 23 00:16:24 crc kubenswrapper[4953]: I0223 00:16:24.016236 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" event={"ID":"61b8140f-0a3b-401c-ac40-92def4a3f617","Type":"ContainerDied","Data":"7652b8bea19840535d5e0e67eb44f51984c040683fea5a008b30d90c8f927429"} Feb 23 00:16:25 crc kubenswrapper[4953]: I0223 00:16:25.023859 4953 generic.go:334] "Generic (PLEG): container finished" podID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerID="34cf07da8cfe743bf1cf6638f9b0f0860a17988a5fd06b5d2c22e2d2fc72da61" exitCode=0 Feb 23 00:16:25 crc kubenswrapper[4953]: I0223 00:16:25.023898 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" event={"ID":"61b8140f-0a3b-401c-ac40-92def4a3f617","Type":"ContainerDied","Data":"34cf07da8cfe743bf1cf6638f9b0f0860a17988a5fd06b5d2c22e2d2fc72da61"} Feb 23 00:16:25 crc kubenswrapper[4953]: I0223 00:16:25.959453 4953 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h"] Feb 23 00:16:25 crc kubenswrapper[4953]: I0223 00:16:25.960420 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:25 crc kubenswrapper[4953]: I0223 00:16:25.972512 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h"] Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.152267 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.152609 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkxt\" (UniqueName: \"kubernetes.io/projected/9b5a2242-d62e-40fd-be0d-80648962c8d8-kube-api-access-mjkxt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.152661 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 
00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.253751 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.253875 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.253906 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkxt\" (UniqueName: \"kubernetes.io/projected/9b5a2242-d62e-40fd-be0d-80648962c8d8-kube-api-access-mjkxt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.254373 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.254434 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.281184 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkxt\" (UniqueName: \"kubernetes.io/projected/9b5a2242-d62e-40fd-be0d-80648962c8d8-kube-api-access-mjkxt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.357982 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.455894 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzpkq\" (UniqueName: \"kubernetes.io/projected/61b8140f-0a3b-401c-ac40-92def4a3f617-kube-api-access-vzpkq\") pod \"61b8140f-0a3b-401c-ac40-92def4a3f617\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.456033 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-util\") pod \"61b8140f-0a3b-401c-ac40-92def4a3f617\" (UID: \"61b8140f-0a3b-401c-ac40-92def4a3f617\") " Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.456068 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-bundle\") pod \"61b8140f-0a3b-401c-ac40-92def4a3f617\" (UID: 
\"61b8140f-0a3b-401c-ac40-92def4a3f617\") " Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.458261 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-bundle" (OuterVolumeSpecName: "bundle") pod "61b8140f-0a3b-401c-ac40-92def4a3f617" (UID: "61b8140f-0a3b-401c-ac40-92def4a3f617"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.463023 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61b8140f-0a3b-401c-ac40-92def4a3f617-kube-api-access-vzpkq" (OuterVolumeSpecName: "kube-api-access-vzpkq") pod "61b8140f-0a3b-401c-ac40-92def4a3f617" (UID: "61b8140f-0a3b-401c-ac40-92def4a3f617"). InnerVolumeSpecName "kube-api-access-vzpkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.482353 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-util" (OuterVolumeSpecName: "util") pod "61b8140f-0a3b-401c-ac40-92def4a3f617" (UID: "61b8140f-0a3b-401c-ac40-92def4a3f617"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.557414 4953 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-util\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.557445 4953 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61b8140f-0a3b-401c-ac40-92def4a3f617-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.557456 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzpkq\" (UniqueName: \"kubernetes.io/projected/61b8140f-0a3b-401c-ac40-92def4a3f617-kube-api-access-vzpkq\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:26 crc kubenswrapper[4953]: I0223 00:16:26.573970 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:27 crc kubenswrapper[4953]: I0223 00:16:27.036010 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" event={"ID":"61b8140f-0a3b-401c-ac40-92def4a3f617","Type":"ContainerDied","Data":"c9266e61809e677f41b69b95a005becf51e7060f9c9a5e4b0d63513c48f8926f"} Feb 23 00:16:27 crc kubenswrapper[4953]: I0223 00:16:27.036044 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9266e61809e677f41b69b95a005becf51e7060f9c9a5e4b0d63513c48f8926f" Feb 23 00:16:27 crc kubenswrapper[4953]: I0223 00:16:27.036101 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5" Feb 23 00:16:27 crc kubenswrapper[4953]: I0223 00:16:27.105675 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h"] Feb 23 00:16:28 crc kubenswrapper[4953]: I0223 00:16:28.042193 4953 generic.go:334] "Generic (PLEG): container finished" podID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerID="e9e59fa888777eb37b165ed5d82a4243b2200bc81b42dd7d27d372f53399304c" exitCode=0 Feb 23 00:16:28 crc kubenswrapper[4953]: I0223 00:16:28.043522 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" event={"ID":"9b5a2242-d62e-40fd-be0d-80648962c8d8","Type":"ContainerDied","Data":"e9e59fa888777eb37b165ed5d82a4243b2200bc81b42dd7d27d372f53399304c"} Feb 23 00:16:28 crc kubenswrapper[4953]: I0223 00:16:28.043628 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" event={"ID":"9b5a2242-d62e-40fd-be0d-80648962c8d8","Type":"ContainerStarted","Data":"c54ac8800a1870ca2e6c44966cc0a14d12c68f1359af6e5cfdf6517710802758"} Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.647102 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz"] Feb 23 00:16:30 crc kubenswrapper[4953]: E0223 00:16:30.647678 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerName="util" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.647694 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerName="util" Feb 23 00:16:30 crc kubenswrapper[4953]: E0223 00:16:30.647703 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerName="pull" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.647711 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerName="pull" Feb 23 00:16:30 crc kubenswrapper[4953]: E0223 00:16:30.647729 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerName="extract" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.647738 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerName="extract" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.647881 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b8140f-0a3b-401c-ac40-92def4a3f617" containerName="extract" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.648341 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.650235 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.653110 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.653439 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-wjgph" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.673197 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz"] Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.722491 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc"] Feb 23 00:16:30 crc 
kubenswrapper[4953]: I0223 00:16:30.723218 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" Feb 23 00:16:30 crc kubenswrapper[4953]: W0223 00:16:30.725435 4953 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Feb 23 00:16:30 crc kubenswrapper[4953]: W0223 00:16:30.725453 4953 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-cj9nj": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-dockercfg-cj9nj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Feb 23 00:16:30 crc kubenswrapper[4953]: E0223 00:16:30.725475 4953 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 00:16:30 crc kubenswrapper[4953]: E0223 00:16:30.725493 4953 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-cj9nj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-dockercfg-cj9nj\" is forbidden: User 
\"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.736095 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92"] Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.736757 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.739558 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc"] Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.779940 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92"] Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.837568 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/063a0b62-c9b0-4730-9485-ecd85781d17a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-bll92\" (UID: \"063a0b62-c9b0-4730-9485-ecd85781d17a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.837652 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae05751c-98c6-4129-8206-148d9553e542-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc\" (UID: \"ae05751c-98c6-4129-8206-148d9553e542\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" Feb 23 00:16:30 
crc kubenswrapper[4953]: I0223 00:16:30.837707 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/063a0b62-c9b0-4730-9485-ecd85781d17a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-bll92\" (UID: \"063a0b62-c9b0-4730-9485-ecd85781d17a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.837781 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae05751c-98c6-4129-8206-148d9553e542-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc\" (UID: \"ae05751c-98c6-4129-8206-148d9553e542\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.837816 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dddw\" (UniqueName: \"kubernetes.io/projected/bde89fe3-f774-4c9b-924d-fccad8941098-kube-api-access-4dddw\") pod \"obo-prometheus-operator-68bc856cb9-p9mgz\" (UID: \"bde89fe3-f774-4c9b-924d-fccad8941098\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.851078 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6gqst"] Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.851876 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.855831 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-4h7pk" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.856019 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.866535 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6gqst"] Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.938678 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/063a0b62-c9b0-4730-9485-ecd85781d17a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-bll92\" (UID: \"063a0b62-c9b0-4730-9485-ecd85781d17a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.938734 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae05751c-98c6-4129-8206-148d9553e542-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc\" (UID: \"ae05751c-98c6-4129-8206-148d9553e542\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.938770 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/063a0b62-c9b0-4730-9485-ecd85781d17a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-bll92\" (UID: \"063a0b62-c9b0-4730-9485-ecd85781d17a\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.938809 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bf9b63e-1b5b-4063-a9e9-3619753fc50e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6gqst\" (UID: \"7bf9b63e-1b5b-4063-a9e9-3619753fc50e\") " pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.938838 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd294\" (UniqueName: \"kubernetes.io/projected/7bf9b63e-1b5b-4063-a9e9-3619753fc50e-kube-api-access-pd294\") pod \"observability-operator-59bdc8b94-6gqst\" (UID: \"7bf9b63e-1b5b-4063-a9e9-3619753fc50e\") " pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.938887 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae05751c-98c6-4129-8206-148d9553e542-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc\" (UID: \"ae05751c-98c6-4129-8206-148d9553e542\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.938923 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dddw\" (UniqueName: \"kubernetes.io/projected/bde89fe3-f774-4c9b-924d-fccad8941098-kube-api-access-4dddw\") pod \"obo-prometheus-operator-68bc856cb9-p9mgz\" (UID: \"bde89fe3-f774-4c9b-924d-fccad8941098\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.960280 4953 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4dddw\" (UniqueName: \"kubernetes.io/projected/bde89fe3-f774-4c9b-924d-fccad8941098-kube-api-access-4dddw\") pod \"obo-prometheus-operator-68bc856cb9-p9mgz\" (UID: \"bde89fe3-f774-4c9b-924d-fccad8941098\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.960478 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x7skn"] Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.961137 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.963926 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.970665 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-l8zqn" Feb 23 00:16:30 crc kubenswrapper[4953]: I0223 00:16:30.976463 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x7skn"] Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.039916 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bf9b63e-1b5b-4063-a9e9-3619753fc50e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6gqst\" (UID: \"7bf9b63e-1b5b-4063-a9e9-3619753fc50e\") " pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.039990 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd294\" (UniqueName: \"kubernetes.io/projected/7bf9b63e-1b5b-4063-a9e9-3619753fc50e-kube-api-access-pd294\") pod 
\"observability-operator-59bdc8b94-6gqst\" (UID: \"7bf9b63e-1b5b-4063-a9e9-3619753fc50e\") " pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.057158 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd294\" (UniqueName: \"kubernetes.io/projected/7bf9b63e-1b5b-4063-a9e9-3619753fc50e-kube-api-access-pd294\") pod \"observability-operator-59bdc8b94-6gqst\" (UID: \"7bf9b63e-1b5b-4063-a9e9-3619753fc50e\") " pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.059072 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bf9b63e-1b5b-4063-a9e9-3619753fc50e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6gqst\" (UID: \"7bf9b63e-1b5b-4063-a9e9-3619753fc50e\") " pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.141537 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wnr\" (UniqueName: \"kubernetes.io/projected/6166ab70-4311-4eeb-a162-a48aa002f5f1-kube-api-access-c9wnr\") pod \"perses-operator-5bf474d74f-x7skn\" (UID: \"6166ab70-4311-4eeb-a162-a48aa002f5f1\") " pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.141843 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6166ab70-4311-4eeb-a162-a48aa002f5f1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x7skn\" (UID: \"6166ab70-4311-4eeb-a162-a48aa002f5f1\") " pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.193738 4953 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.242946 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wnr\" (UniqueName: \"kubernetes.io/projected/6166ab70-4311-4eeb-a162-a48aa002f5f1-kube-api-access-c9wnr\") pod \"perses-operator-5bf474d74f-x7skn\" (UID: \"6166ab70-4311-4eeb-a162-a48aa002f5f1\") " pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.243068 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6166ab70-4311-4eeb-a162-a48aa002f5f1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x7skn\" (UID: \"6166ab70-4311-4eeb-a162-a48aa002f5f1\") " pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.244210 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6166ab70-4311-4eeb-a162-a48aa002f5f1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x7skn\" (UID: \"6166ab70-4311-4eeb-a162-a48aa002f5f1\") " pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.272121 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wnr\" (UniqueName: \"kubernetes.io/projected/6166ab70-4311-4eeb-a162-a48aa002f5f1-kube-api-access-c9wnr\") pod \"perses-operator-5bf474d74f-x7skn\" (UID: \"6166ab70-4311-4eeb-a162-a48aa002f5f1\") " pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.293455 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.564603 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.573590 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae05751c-98c6-4129-8206-148d9553e542-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc\" (UID: \"ae05751c-98c6-4129-8206-148d9553e542\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.576258 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae05751c-98c6-4129-8206-148d9553e542-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc\" (UID: \"ae05751c-98c6-4129-8206-148d9553e542\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.578509 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/063a0b62-c9b0-4730-9485-ecd85781d17a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-bll92\" (UID: \"063a0b62-c9b0-4730-9485-ecd85781d17a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.581982 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/063a0b62-c9b0-4730-9485-ecd85781d17a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-758bb7fb84-bll92\" (UID: \"063a0b62-c9b0-4730-9485-ecd85781d17a\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.627397 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-cj9nj" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.637900 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" Feb 23 00:16:31 crc kubenswrapper[4953]: I0223 00:16:31.656447 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" Feb 23 00:16:33 crc kubenswrapper[4953]: I0223 00:16:33.452092 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6gqst"] Feb 23 00:16:33 crc kubenswrapper[4953]: I0223 00:16:33.553408 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x7skn"] Feb 23 00:16:33 crc kubenswrapper[4953]: I0223 00:16:33.725109 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc"] Feb 23 00:16:33 crc kubenswrapper[4953]: I0223 00:16:33.729534 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz"] Feb 23 00:16:33 crc kubenswrapper[4953]: W0223 00:16:33.739911 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbde89fe3_f774_4c9b_924d_fccad8941098.slice/crio-91362fa4b9759a4665c2fca2c45ba5e34e76c48534f0f0044903e6b3c7ba6928 WatchSource:0}: Error finding container 91362fa4b9759a4665c2fca2c45ba5e34e76c48534f0f0044903e6b3c7ba6928: Status 404 returned error can't find the container with id 
91362fa4b9759a4665c2fca2c45ba5e34e76c48534f0f0044903e6b3c7ba6928 Feb 23 00:16:33 crc kubenswrapper[4953]: I0223 00:16:33.927734 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92"] Feb 23 00:16:34 crc kubenswrapper[4953]: I0223 00:16:34.176261 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz" event={"ID":"bde89fe3-f774-4c9b-924d-fccad8941098","Type":"ContainerStarted","Data":"91362fa4b9759a4665c2fca2c45ba5e34e76c48534f0f0044903e6b3c7ba6928"} Feb 23 00:16:34 crc kubenswrapper[4953]: I0223 00:16:34.177217 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-x7skn" event={"ID":"6166ab70-4311-4eeb-a162-a48aa002f5f1","Type":"ContainerStarted","Data":"9b3575fcde1b6f88b58018811f34c2edd1f32f4c3337b43617ab97ba50236620"} Feb 23 00:16:34 crc kubenswrapper[4953]: I0223 00:16:34.178399 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" event={"ID":"ae05751c-98c6-4129-8206-148d9553e542","Type":"ContainerStarted","Data":"31dc821c99f8c9b261a9ce25efc772b0318af609b67a9f7b53e10a54dacb3597"} Feb 23 00:16:34 crc kubenswrapper[4953]: I0223 00:16:34.180565 4953 generic.go:334] "Generic (PLEG): container finished" podID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerID="f00f6907aae42f301a7a285136ab795cead6a60dc782d250d1b4f10448c320ca" exitCode=0 Feb 23 00:16:34 crc kubenswrapper[4953]: I0223 00:16:34.180654 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" event={"ID":"9b5a2242-d62e-40fd-be0d-80648962c8d8","Type":"ContainerDied","Data":"f00f6907aae42f301a7a285136ab795cead6a60dc782d250d1b4f10448c320ca"} Feb 23 00:16:34 crc kubenswrapper[4953]: I0223 00:16:34.181813 4953 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" event={"ID":"063a0b62-c9b0-4730-9485-ecd85781d17a","Type":"ContainerStarted","Data":"ae8eb3cc12359cfb930aa42f23c42a54c4fb9c1769597b3a2aadb29fa4853889"} Feb 23 00:16:34 crc kubenswrapper[4953]: I0223 00:16:34.183053 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6gqst" event={"ID":"7bf9b63e-1b5b-4063-a9e9-3619753fc50e","Type":"ContainerStarted","Data":"53628a1e126c265f9c833bda8e8578d6bcbd8020526ba923ffc8835932bd9e1d"} Feb 23 00:16:35 crc kubenswrapper[4953]: I0223 00:16:35.191903 4953 generic.go:334] "Generic (PLEG): container finished" podID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerID="68dfb5efa351b9f3d5e7f187d7f01ec8534e5528935201aac87634a506481c1f" exitCode=0 Feb 23 00:16:35 crc kubenswrapper[4953]: I0223 00:16:35.191961 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" event={"ID":"9b5a2242-d62e-40fd-be0d-80648962c8d8","Type":"ContainerDied","Data":"68dfb5efa351b9f3d5e7f187d7f01ec8534e5528935201aac87634a506481c1f"} Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.476571 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.539640 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-util\") pod \"9b5a2242-d62e-40fd-be0d-80648962c8d8\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.539745 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-bundle\") pod \"9b5a2242-d62e-40fd-be0d-80648962c8d8\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.539779 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjkxt\" (UniqueName: \"kubernetes.io/projected/9b5a2242-d62e-40fd-be0d-80648962c8d8-kube-api-access-mjkxt\") pod \"9b5a2242-d62e-40fd-be0d-80648962c8d8\" (UID: \"9b5a2242-d62e-40fd-be0d-80648962c8d8\") " Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.541391 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-bundle" (OuterVolumeSpecName: "bundle") pod "9b5a2242-d62e-40fd-be0d-80648962c8d8" (UID: "9b5a2242-d62e-40fd-be0d-80648962c8d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.546678 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b5a2242-d62e-40fd-be0d-80648962c8d8-kube-api-access-mjkxt" (OuterVolumeSpecName: "kube-api-access-mjkxt") pod "9b5a2242-d62e-40fd-be0d-80648962c8d8" (UID: "9b5a2242-d62e-40fd-be0d-80648962c8d8"). InnerVolumeSpecName "kube-api-access-mjkxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.556358 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-util" (OuterVolumeSpecName: "util") pod "9b5a2242-d62e-40fd-be0d-80648962c8d8" (UID: "9b5a2242-d62e-40fd-be0d-80648962c8d8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.641885 4953 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.642332 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjkxt\" (UniqueName: \"kubernetes.io/projected/9b5a2242-d62e-40fd-be0d-80648962c8d8-kube-api-access-mjkxt\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:36 crc kubenswrapper[4953]: I0223 00:16:36.642343 4953 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9b5a2242-d62e-40fd-be0d-80648962c8d8-util\") on node \"crc\" DevicePath \"\"" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.234973 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" event={"ID":"9b5a2242-d62e-40fd-be0d-80648962c8d8","Type":"ContainerDied","Data":"c54ac8800a1870ca2e6c44966cc0a14d12c68f1359af6e5cfdf6517710802758"} Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.235012 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54ac8800a1870ca2e6c44966cc0a14d12c68f1359af6e5cfdf6517710802758" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.235053 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.516175 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7599cd69bf-rmm5h"] Feb 23 00:16:37 crc kubenswrapper[4953]: E0223 00:16:37.516408 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerName="extract" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.516420 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerName="extract" Feb 23 00:16:37 crc kubenswrapper[4953]: E0223 00:16:37.516427 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerName="pull" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.516434 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerName="pull" Feb 23 00:16:37 crc kubenswrapper[4953]: E0223 00:16:37.516444 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerName="util" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.516450 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerName="util" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.516542 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5a2242-d62e-40fd-be0d-80648962c8d8" containerName="extract" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.516891 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.520603 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.521077 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-4kml5" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.521132 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.521564 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.545077 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7599cd69bf-rmm5h"] Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.656217 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b74f6f63-63b6-44d8-93cb-3871fadf16dd-apiservice-cert\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.656342 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b74f6f63-63b6-44d8-93cb-3871fadf16dd-webhook-cert\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.656377 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vtw2q\" (UniqueName: \"kubernetes.io/projected/b74f6f63-63b6-44d8-93cb-3871fadf16dd-kube-api-access-vtw2q\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.758035 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtw2q\" (UniqueName: \"kubernetes.io/projected/b74f6f63-63b6-44d8-93cb-3871fadf16dd-kube-api-access-vtw2q\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.758135 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b74f6f63-63b6-44d8-93cb-3871fadf16dd-apiservice-cert\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.758187 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b74f6f63-63b6-44d8-93cb-3871fadf16dd-webhook-cert\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.763484 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b74f6f63-63b6-44d8-93cb-3871fadf16dd-webhook-cert\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.765650 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b74f6f63-63b6-44d8-93cb-3871fadf16dd-apiservice-cert\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.789027 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtw2q\" (UniqueName: \"kubernetes.io/projected/b74f6f63-63b6-44d8-93cb-3871fadf16dd-kube-api-access-vtw2q\") pod \"elastic-operator-7599cd69bf-rmm5h\" (UID: \"b74f6f63-63b6-44d8-93cb-3871fadf16dd\") " pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:37 crc kubenswrapper[4953]: I0223 00:16:37.832373 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" Feb 23 00:16:44 crc kubenswrapper[4953]: I0223 00:16:44.629242 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7599cd69bf-rmm5h"] Feb 23 00:16:44 crc kubenswrapper[4953]: I0223 00:16:44.700426 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:16:44 crc kubenswrapper[4953]: I0223 00:16:44.700937 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.282584 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-x7skn" event={"ID":"6166ab70-4311-4eeb-a162-a48aa002f5f1","Type":"ContainerStarted","Data":"9ec56429d7e7cb376a2b38b4440460b6e906d7b34bf6516922f35bac557b2701"} Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.283390 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.285589 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" event={"ID":"ae05751c-98c6-4129-8206-148d9553e542","Type":"ContainerStarted","Data":"b15eeee62d319024042f975b029dcc4726f4272562857949dc6c171c2f64017b"} Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.287412 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" event={"ID":"063a0b62-c9b0-4730-9485-ecd85781d17a","Type":"ContainerStarted","Data":"0bba21c6ffe0fa74584a93fd19bd2851afc420a3c045e72e520b60900e24983a"} Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.289582 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6gqst" event={"ID":"7bf9b63e-1b5b-4063-a9e9-3619753fc50e","Type":"ContainerStarted","Data":"7b06890f27bdb81ce360978c3342a07f2b7c5e938af696f1686615cee0353684"} Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.289890 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.290963 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz" event={"ID":"bde89fe3-f774-4c9b-924d-fccad8941098","Type":"ContainerStarted","Data":"c47947eb7871cb171a2586bc5c810c8eb908908196c8e94b4a70f965da50c0a8"} Feb 23 00:16:45 
crc kubenswrapper[4953]: I0223 00:16:45.291931 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-6gqst" Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.293490 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" event={"ID":"b74f6f63-63b6-44d8-93cb-3871fadf16dd","Type":"ContainerStarted","Data":"c784a1486ff300ebae2c0d984b54127ada077cf36e1e2209018a48c242bec3ce"} Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.308512 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-x7skn" podStartSLOduration=4.537266153 podStartE2EDuration="15.308492965s" podCreationTimestamp="2026-02-23 00:16:30 +0000 UTC" firstStartedPulling="2026-02-23 00:16:33.589324925 +0000 UTC m=+591.523166771" lastFinishedPulling="2026-02-23 00:16:44.360551737 +0000 UTC m=+602.294393583" observedRunningTime="2026-02-23 00:16:45.307553952 +0000 UTC m=+603.241395818" watchObservedRunningTime="2026-02-23 00:16:45.308492965 +0000 UTC m=+603.242334821" Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.335015 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-bll92" podStartSLOduration=4.943986082 podStartE2EDuration="15.335001685s" podCreationTimestamp="2026-02-23 00:16:30 +0000 UTC" firstStartedPulling="2026-02-23 00:16:33.955335647 +0000 UTC m=+591.889177493" lastFinishedPulling="2026-02-23 00:16:44.34635126 +0000 UTC m=+602.280193096" observedRunningTime="2026-02-23 00:16:45.333057519 +0000 UTC m=+603.266899365" watchObservedRunningTime="2026-02-23 00:16:45.335001685 +0000 UTC m=+603.268843531" Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.359033 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-p9mgz" podStartSLOduration=4.759451585 podStartE2EDuration="15.359018026s" podCreationTimestamp="2026-02-23 00:16:30 +0000 UTC" firstStartedPulling="2026-02-23 00:16:33.746230326 +0000 UTC m=+591.680072172" lastFinishedPulling="2026-02-23 00:16:44.345796757 +0000 UTC m=+602.279638613" observedRunningTime="2026-02-23 00:16:45.355728628 +0000 UTC m=+603.289570494" watchObservedRunningTime="2026-02-23 00:16:45.359018026 +0000 UTC m=+603.292859872" Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.382487 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-6gqst" podStartSLOduration=4.405227883 podStartE2EDuration="15.382470544s" podCreationTimestamp="2026-02-23 00:16:30 +0000 UTC" firstStartedPulling="2026-02-23 00:16:33.513452201 +0000 UTC m=+591.447294047" lastFinishedPulling="2026-02-23 00:16:44.490694862 +0000 UTC m=+602.424536708" observedRunningTime="2026-02-23 00:16:45.379862541 +0000 UTC m=+603.313704387" watchObservedRunningTime="2026-02-23 00:16:45.382470544 +0000 UTC m=+603.316312390" Feb 23 00:16:45 crc kubenswrapper[4953]: I0223 00:16:45.406122 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc" podStartSLOduration=4.80215336 podStartE2EDuration="15.406101115s" podCreationTimestamp="2026-02-23 00:16:30 +0000 UTC" firstStartedPulling="2026-02-23 00:16:33.745955919 +0000 UTC m=+591.679797755" lastFinishedPulling="2026-02-23 00:16:44.349903664 +0000 UTC m=+602.283745510" observedRunningTime="2026-02-23 00:16:45.405267065 +0000 UTC m=+603.339108911" watchObservedRunningTime="2026-02-23 00:16:45.406101115 +0000 UTC m=+603.339942961" Feb 23 00:16:48 crc kubenswrapper[4953]: I0223 00:16:48.314021 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" 
event={"ID":"b74f6f63-63b6-44d8-93cb-3871fadf16dd","Type":"ContainerStarted","Data":"8b9a81b2c897d981e7c1004c11ae26bcadb8a9deccfee1aa92adf251240e33fa"} Feb 23 00:16:48 crc kubenswrapper[4953]: I0223 00:16:48.358865 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7599cd69bf-rmm5h" podStartSLOduration=8.013474231 podStartE2EDuration="11.358844676s" podCreationTimestamp="2026-02-23 00:16:37 +0000 UTC" firstStartedPulling="2026-02-23 00:16:44.643157886 +0000 UTC m=+602.576999732" lastFinishedPulling="2026-02-23 00:16:47.988528331 +0000 UTC m=+605.922370177" observedRunningTime="2026-02-23 00:16:48.342838315 +0000 UTC m=+606.276680171" watchObservedRunningTime="2026-02-23 00:16:48.358844676 +0000 UTC m=+606.292686522" Feb 23 00:16:51 crc kubenswrapper[4953]: I0223 00:16:51.299710 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-x7skn" Feb 23 00:16:52 crc kubenswrapper[4953]: I0223 00:16:52.974818 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h"] Feb 23 00:16:52 crc kubenswrapper[4953]: I0223 00:16:52.975723 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" Feb 23 00:16:52 crc kubenswrapper[4953]: I0223 00:16:52.977455 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 23 00:16:52 crc kubenswrapper[4953]: I0223 00:16:52.977880 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 23 00:16:52 crc kubenswrapper[4953]: I0223 00:16:52.978901 4953 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-hpfwz" Feb 23 00:16:52 crc kubenswrapper[4953]: I0223 00:16:52.987889 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h"] Feb 23 00:16:53 crc kubenswrapper[4953]: I0223 00:16:53.084737 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0064d40b-b7d9-4ecc-bd41-9fc67b90f860-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-qwp7h\" (UID: \"0064d40b-b7d9-4ecc-bd41-9fc67b90f860\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" Feb 23 00:16:53 crc kubenswrapper[4953]: I0223 00:16:53.085138 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhb5t\" (UniqueName: \"kubernetes.io/projected/0064d40b-b7d9-4ecc-bd41-9fc67b90f860-kube-api-access-zhb5t\") pod \"cert-manager-operator-controller-manager-5586865c96-qwp7h\" (UID: \"0064d40b-b7d9-4ecc-bd41-9fc67b90f860\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" Feb 23 00:16:53 crc kubenswrapper[4953]: I0223 00:16:53.186313 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zhb5t\" (UniqueName: \"kubernetes.io/projected/0064d40b-b7d9-4ecc-bd41-9fc67b90f860-kube-api-access-zhb5t\") pod \"cert-manager-operator-controller-manager-5586865c96-qwp7h\" (UID: \"0064d40b-b7d9-4ecc-bd41-9fc67b90f860\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" Feb 23 00:16:53 crc kubenswrapper[4953]: I0223 00:16:53.186366 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0064d40b-b7d9-4ecc-bd41-9fc67b90f860-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-qwp7h\" (UID: \"0064d40b-b7d9-4ecc-bd41-9fc67b90f860\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" Feb 23 00:16:53 crc kubenswrapper[4953]: I0223 00:16:53.186815 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0064d40b-b7d9-4ecc-bd41-9fc67b90f860-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-qwp7h\" (UID: \"0064d40b-b7d9-4ecc-bd41-9fc67b90f860\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" Feb 23 00:16:53 crc kubenswrapper[4953]: I0223 00:16:53.208532 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhb5t\" (UniqueName: \"kubernetes.io/projected/0064d40b-b7d9-4ecc-bd41-9fc67b90f860-kube-api-access-zhb5t\") pod \"cert-manager-operator-controller-manager-5586865c96-qwp7h\" (UID: \"0064d40b-b7d9-4ecc-bd41-9fc67b90f860\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" Feb 23 00:16:53 crc kubenswrapper[4953]: I0223 00:16:53.291908 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" Feb 23 00:16:53 crc kubenswrapper[4953]: I0223 00:16:53.725433 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h"] Feb 23 00:16:53 crc kubenswrapper[4953]: W0223 00:16:53.731877 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0064d40b_b7d9_4ecc_bd41_9fc67b90f860.slice/crio-9bbf537f15435bded0ecb47f5f005ca3f16802918da6235b701898a614630e2f WatchSource:0}: Error finding container 9bbf537f15435bded0ecb47f5f005ca3f16802918da6235b701898a614630e2f: Status 404 returned error can't find the container with id 9bbf537f15435bded0ecb47f5f005ca3f16802918da6235b701898a614630e2f Feb 23 00:16:54 crc kubenswrapper[4953]: I0223 00:16:54.365883 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" event={"ID":"0064d40b-b7d9-4ecc-bd41-9fc67b90f860","Type":"ContainerStarted","Data":"9bbf537f15435bded0ecb47f5f005ca3f16802918da6235b701898a614630e2f"} Feb 23 00:16:58 crc kubenswrapper[4953]: I0223 00:16:58.389686 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" event={"ID":"0064d40b-b7d9-4ecc-bd41-9fc67b90f860","Type":"ContainerStarted","Data":"0fa5853fe3b160a7b797e4e8760a1d33a45a64f41000a07a8970c8daaa285f9a"} Feb 23 00:16:58 crc kubenswrapper[4953]: I0223 00:16:58.417298 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-qwp7h" podStartSLOduration=2.5560472819999998 podStartE2EDuration="6.417264281s" podCreationTimestamp="2026-02-23 00:16:52 +0000 UTC" firstStartedPulling="2026-02-23 00:16:53.7344718 +0000 UTC m=+611.668313646" 
lastFinishedPulling="2026-02-23 00:16:57.595688799 +0000 UTC m=+615.529530645" observedRunningTime="2026-02-23 00:16:58.413711727 +0000 UTC m=+616.347553583" watchObservedRunningTime="2026-02-23 00:16:58.417264281 +0000 UTC m=+616.351106137" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.893562 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.894554 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.898188 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.900141 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-zp4tn" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.900326 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.900543 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.900617 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.900910 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.901103 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.901190 4953 reflector.go:368] Caches populated for 
*v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.904065 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.931868 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.982940 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983030 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983056 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983091 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983122 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983187 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983223 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983248 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 
00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983271 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983312 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983341 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983364 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983396 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983418 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:16:59 crc kubenswrapper[4953]: I0223 00:16:59.983442 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085011 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085083 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085116 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085147 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085195 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085240 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085267 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085416 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085450 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085470 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085493 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085512 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085533 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085555 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085586 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.085655 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 
00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.086078 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.086392 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.086803 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.087266 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.087538 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elasticsearch-logs\") pod 
\"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.087634 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.087805 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.097588 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.097991 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.098503 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.099957 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.100555 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.114153 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.117584 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f4670eb4-d5d9-4f15-b7c3-49ee33ba7412-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.212844 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 23 00:17:00 crc kubenswrapper[4953]: I0223 00:17:00.671601 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.416243 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412","Type":"ContainerStarted","Data":"0684207723fc9f3e130b67d7ea158b1f06ab0db5b58ddbe2a6275f030e29d6c5"}
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.768119 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-z2kb8"]
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.768797 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:01 crc kubenswrapper[4953]: W0223 00:17:01.771260 4953 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object
Feb 23 00:17:01 crc kubenswrapper[4953]: E0223 00:17:01.771336 4953 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 23 00:17:01 crc kubenswrapper[4953]: W0223 00:17:01.771416 4953 reflector.go:561] object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object
Feb 23 00:17:01 crc kubenswrapper[4953]: E0223 00:17:01.771440 4953 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 23 00:17:01 crc kubenswrapper[4953]: W0223 00:17:01.778847 4953 reflector.go:561] object-"cert-manager"/"cert-manager-webhook-dockercfg-768h6": failed to list *v1.Secret: secrets "cert-manager-webhook-dockercfg-768h6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object
Feb 23 00:17:01 crc kubenswrapper[4953]: E0223 00:17:01.778878 4953 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-768h6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-webhook-dockercfg-768h6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.801247 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-z2kb8"]
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.820592 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbhs\" (UniqueName: \"kubernetes.io/projected/5f1a348d-9fbc-45fc-9308-00a3201cc0c7-kube-api-access-cdbhs\") pod \"cert-manager-webhook-6888856db4-z2kb8\" (UID: \"5f1a348d-9fbc-45fc-9308-00a3201cc0c7\") " pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.820882 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f1a348d-9fbc-45fc-9308-00a3201cc0c7-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-z2kb8\" (UID: \"5f1a348d-9fbc-45fc-9308-00a3201cc0c7\") " pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.922028 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdbhs\" (UniqueName: \"kubernetes.io/projected/5f1a348d-9fbc-45fc-9308-00a3201cc0c7-kube-api-access-cdbhs\") pod \"cert-manager-webhook-6888856db4-z2kb8\" (UID: \"5f1a348d-9fbc-45fc-9308-00a3201cc0c7\") " pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.922082 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f1a348d-9fbc-45fc-9308-00a3201cc0c7-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-z2kb8\" (UID: \"5f1a348d-9fbc-45fc-9308-00a3201cc0c7\") " pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:01 crc kubenswrapper[4953]: I0223 00:17:01.940987 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f1a348d-9fbc-45fc-9308-00a3201cc0c7-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-z2kb8\" (UID: \"5f1a348d-9fbc-45fc-9308-00a3201cc0c7\") " pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.614038 4953 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-768h6"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.800427 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.833173 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.843380 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdbhs\" (UniqueName: \"kubernetes.io/projected/5f1a348d-9fbc-45fc-9308-00a3201cc0c7-kube-api-access-cdbhs\") pod \"cert-manager-webhook-6888856db4-z2kb8\" (UID: \"5f1a348d-9fbc-45fc-9308-00a3201cc0c7\") " pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.844069 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8tcxx"]
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.845173 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.850474 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8tcxx"]
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.864712 4953 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kwfr5"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.944155 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f3fa8cc-7b5d-4e59-824d-8247391c15d7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8tcxx\" (UID: \"9f3fa8cc-7b5d-4e59-824d-8247391c15d7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.944511 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdkt\" (UniqueName: \"kubernetes.io/projected/9f3fa8cc-7b5d-4e59-824d-8247391c15d7-kube-api-access-fsdkt\") pod \"cert-manager-cainjector-5545bd876-8tcxx\" (UID: \"9f3fa8cc-7b5d-4e59-824d-8247391c15d7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx"
Feb 23 00:17:02 crc kubenswrapper[4953]: I0223 00:17:02.997098 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:03 crc kubenswrapper[4953]: I0223 00:17:03.047088 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f3fa8cc-7b5d-4e59-824d-8247391c15d7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8tcxx\" (UID: \"9f3fa8cc-7b5d-4e59-824d-8247391c15d7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx"
Feb 23 00:17:03 crc kubenswrapper[4953]: I0223 00:17:03.047142 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdkt\" (UniqueName: \"kubernetes.io/projected/9f3fa8cc-7b5d-4e59-824d-8247391c15d7-kube-api-access-fsdkt\") pod \"cert-manager-cainjector-5545bd876-8tcxx\" (UID: \"9f3fa8cc-7b5d-4e59-824d-8247391c15d7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx"
Feb 23 00:17:03 crc kubenswrapper[4953]: I0223 00:17:03.076114 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdkt\" (UniqueName: \"kubernetes.io/projected/9f3fa8cc-7b5d-4e59-824d-8247391c15d7-kube-api-access-fsdkt\") pod \"cert-manager-cainjector-5545bd876-8tcxx\" (UID: \"9f3fa8cc-7b5d-4e59-824d-8247391c15d7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx"
Feb 23 00:17:03 crc kubenswrapper[4953]: I0223 00:17:03.082006 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f3fa8cc-7b5d-4e59-824d-8247391c15d7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-8tcxx\" (UID: \"9f3fa8cc-7b5d-4e59-824d-8247391c15d7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx"
Feb 23 00:17:03 crc kubenswrapper[4953]: I0223 00:17:03.208114 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx"
Feb 23 00:17:03 crc kubenswrapper[4953]: I0223 00:17:03.357439 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-z2kb8"]
Feb 23 00:17:03 crc kubenswrapper[4953]: I0223 00:17:03.447192 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8" event={"ID":"5f1a348d-9fbc-45fc-9308-00a3201cc0c7","Type":"ContainerStarted","Data":"ba46d216a564fdc73b9200554b13d170775334fd6107ce963d5c319b45f1ef1f"}
Feb 23 00:17:03 crc kubenswrapper[4953]: I0223 00:17:03.548641 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-8tcxx"]
Feb 23 00:17:04 crc kubenswrapper[4953]: I0223 00:17:04.463881 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx" event={"ID":"9f3fa8cc-7b5d-4e59-824d-8247391c15d7","Type":"ContainerStarted","Data":"4f5c9558d4b78d665ba371761094c3bef076942b1ec2c367a82f71fda49d8f76"}
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.277863 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-zvds4"]
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.279380 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-zvds4"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.283164 4953 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nb8j7"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.307978 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-zvds4"]
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.420404 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f21ba8e-cf20-4d5b-97ee-70ba7583a380-bound-sa-token\") pod \"cert-manager-545d4d4674-zvds4\" (UID: \"7f21ba8e-cf20-4d5b-97ee-70ba7583a380\") " pod="cert-manager/cert-manager-545d4d4674-zvds4"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.420606 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk8rc\" (UniqueName: \"kubernetes.io/projected/7f21ba8e-cf20-4d5b-97ee-70ba7583a380-kube-api-access-dk8rc\") pod \"cert-manager-545d4d4674-zvds4\" (UID: \"7f21ba8e-cf20-4d5b-97ee-70ba7583a380\") " pod="cert-manager/cert-manager-545d4d4674-zvds4"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.522708 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f21ba8e-cf20-4d5b-97ee-70ba7583a380-bound-sa-token\") pod \"cert-manager-545d4d4674-zvds4\" (UID: \"7f21ba8e-cf20-4d5b-97ee-70ba7583a380\") " pod="cert-manager/cert-manager-545d4d4674-zvds4"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.523612 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk8rc\" (UniqueName: \"kubernetes.io/projected/7f21ba8e-cf20-4d5b-97ee-70ba7583a380-kube-api-access-dk8rc\") pod \"cert-manager-545d4d4674-zvds4\" (UID: \"7f21ba8e-cf20-4d5b-97ee-70ba7583a380\") " pod="cert-manager/cert-manager-545d4d4674-zvds4"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.548387 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f21ba8e-cf20-4d5b-97ee-70ba7583a380-bound-sa-token\") pod \"cert-manager-545d4d4674-zvds4\" (UID: \"7f21ba8e-cf20-4d5b-97ee-70ba7583a380\") " pod="cert-manager/cert-manager-545d4d4674-zvds4"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.554052 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk8rc\" (UniqueName: \"kubernetes.io/projected/7f21ba8e-cf20-4d5b-97ee-70ba7583a380-kube-api-access-dk8rc\") pod \"cert-manager-545d4d4674-zvds4\" (UID: \"7f21ba8e-cf20-4d5b-97ee-70ba7583a380\") " pod="cert-manager/cert-manager-545d4d4674-zvds4"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.607086 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-zvds4"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.700889 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.700987 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.701069 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpl86"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.702025 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ebcf43b0252f85e33937cd0598c95271871ca36e48dd4c099739b60157b6192"} pod="openshift-machine-config-operator/machine-config-daemon-gpl86" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 00:17:14 crc kubenswrapper[4953]: I0223 00:17:14.702100 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" containerID="cri-o://6ebcf43b0252f85e33937cd0598c95271871ca36e48dd4c099739b60157b6192" gracePeriod=600
Feb 23 00:17:15 crc kubenswrapper[4953]: I0223 00:17:15.556836 4953 generic.go:334] "Generic (PLEG): container finished" podID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerID="6ebcf43b0252f85e33937cd0598c95271871ca36e48dd4c099739b60157b6192" exitCode=0
Feb 23 00:17:15 crc kubenswrapper[4953]: I0223 00:17:15.556948 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerDied","Data":"6ebcf43b0252f85e33937cd0598c95271871ca36e48dd4c099739b60157b6192"}
Feb 23 00:17:15 crc kubenswrapper[4953]: I0223 00:17:15.557449 4953 scope.go:117] "RemoveContainer" containerID="e85dfaed3628c17b672280ee0d620d6df9b175b1e8985b9cad3e96240e250b5d"
Feb 23 00:17:19 crc kubenswrapper[4953]: E0223 00:17:19.790105 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20"
Feb 23 00:17:19 crc kubenswrapper[4953]: E0223 00:17:19.791712 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(f4670eb4-d5d9-4f15-b7c3-49ee33ba7412): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 00:17:19 crc kubenswrapper[4953]: E0223 00:17:19.793357 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="f4670eb4-d5d9-4f15-b7c3-49ee33ba7412"
Feb 23 00:17:19 crc kubenswrapper[4953]: I0223 00:17:19.829346 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-zvds4"]
Feb 23 00:17:19 crc kubenswrapper[4953]: W0223 00:17:19.834017 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f21ba8e_cf20_4d5b_97ee_70ba7583a380.slice/crio-e705622431b280adbf82f9f375b38cbc1b177d8edb950fdef4546f684b350d82 WatchSource:0}: Error finding container e705622431b280adbf82f9f375b38cbc1b177d8edb950fdef4546f684b350d82: Status 404 returned error can't find the container with id e705622431b280adbf82f9f375b38cbc1b177d8edb950fdef4546f684b350d82
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.599167 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx" event={"ID":"9f3fa8cc-7b5d-4e59-824d-8247391c15d7","Type":"ContainerStarted","Data":"585b1aecba6042457aa48ed73ecbc559e158f5f2feb93904b67ca999873ff691"}
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.601977 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-zvds4" event={"ID":"7f21ba8e-cf20-4d5b-97ee-70ba7583a380","Type":"ContainerStarted","Data":"da4c6e2899d0d5131e3ddc4960d67ac7dbac3cbb69ccf7aa896d68c31db656c7"}
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.602052 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-zvds4" event={"ID":"7f21ba8e-cf20-4d5b-97ee-70ba7583a380","Type":"ContainerStarted","Data":"e705622431b280adbf82f9f375b38cbc1b177d8edb950fdef4546f684b350d82"}
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.606575 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"f49e743a8b3d2741657144ee2c2b4859d85b42b2b42be220c9172980ca78223a"}
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.608853 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8" event={"ID":"5f1a348d-9fbc-45fc-9308-00a3201cc0c7","Type":"ContainerStarted","Data":"6dc273225e6e9705fa12b27a47b0f00843198300eb453cabf109ab898e4d8c7a"}
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.608999 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:20 crc kubenswrapper[4953]: E0223 00:17:20.611044 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="f4670eb4-d5d9-4f15-b7c3-49ee33ba7412"
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.629385 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-8tcxx" podStartSLOduration=2.543142224 podStartE2EDuration="18.629363929s" podCreationTimestamp="2026-02-23 00:17:02 +0000 UTC" firstStartedPulling="2026-02-23 00:17:03.569078805 +0000 UTC m=+621.502920651" lastFinishedPulling="2026-02-23 00:17:19.65530052 +0000 UTC m=+637.589142356" observedRunningTime="2026-02-23 00:17:20.627916974 +0000 UTC m=+638.561758820" watchObservedRunningTime="2026-02-23 00:17:20.629363929 +0000 UTC m=+638.563205795"
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.664459 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8" podStartSLOduration=3.45051687 podStartE2EDuration="19.664441812s" podCreationTimestamp="2026-02-23 00:17:01 +0000 UTC" firstStartedPulling="2026-02-23 00:17:03.378940934 +0000 UTC m=+621.312782780" lastFinishedPulling="2026-02-23 00:17:19.592865876 +0000 UTC m=+637.526707722" observedRunningTime="2026-02-23 00:17:20.660824236 +0000 UTC m=+638.594666132" watchObservedRunningTime="2026-02-23 00:17:20.664441812 +0000 UTC m=+638.598283648"
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.746151 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-zvds4" podStartSLOduration=6.746131435 podStartE2EDuration="6.746131435s" podCreationTimestamp="2026-02-23 00:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 00:17:20.741950675 +0000 UTC m=+638.675792561" watchObservedRunningTime="2026-02-23 00:17:20.746131435 +0000 UTC m=+638.679973281"
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.941143 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 23 00:17:20 crc kubenswrapper[4953]: I0223 00:17:20.974034 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 23 00:17:21 crc kubenswrapper[4953]: E0223 00:17:21.616853 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="f4670eb4-d5d9-4f15-b7c3-49ee33ba7412"
Feb 23 00:17:22 crc kubenswrapper[4953]: E0223 00:17:22.623362 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="f4670eb4-d5d9-4f15-b7c3-49ee33ba7412"
Feb 23 00:17:28 crc kubenswrapper[4953]: I0223 00:17:28.001072 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-z2kb8"
Feb 23 00:17:39 crc kubenswrapper[4953]: I0223 00:17:39.758685 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412","Type":"ContainerStarted","Data":"be1fa70917af655d04eca78962c0d22079810a21f08dde08db603c54ca9b520f"}
Feb 23 00:17:41 crc kubenswrapper[4953]: I0223 00:17:41.774940 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4670eb4-d5d9-4f15-b7c3-49ee33ba7412" containerID="be1fa70917af655d04eca78962c0d22079810a21f08dde08db603c54ca9b520f" exitCode=0
Feb 23 00:17:41 crc kubenswrapper[4953]: I0223 00:17:41.774998 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412","Type":"ContainerDied","Data":"be1fa70917af655d04eca78962c0d22079810a21f08dde08db603c54ca9b520f"}
Feb 23 00:17:42 crc kubenswrapper[4953]: I0223 00:17:42.784257 4953 generic.go:334] "Generic (PLEG): container finished" podID="f4670eb4-d5d9-4f15-b7c3-49ee33ba7412" containerID="174206aa44141efe6bedec7595dfaab69a4d2128e195b44b925c0d67cbf925c4" exitCode=0
Feb 23 00:17:42 crc kubenswrapper[4953]: I0223 00:17:42.784360 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412","Type":"ContainerDied","Data":"174206aa44141efe6bedec7595dfaab69a4d2128e195b44b925c0d67cbf925c4"}
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.794165 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f4670eb4-d5d9-4f15-b7c3-49ee33ba7412","Type":"ContainerStarted","Data":"eaf3b7b152c2cced30190bcdf11c8f53be95d92657ac15690f15aaad9bdbb212"}
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.794806 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0"
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.850588 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.352429716 podStartE2EDuration="44.850573697s" podCreationTimestamp="2026-02-23 00:16:59 +0000 UTC" firstStartedPulling="2026-02-23 00:17:00.680446299 +0000 UTC m=+618.614288145" lastFinishedPulling="2026-02-23 00:17:39.17859028 +0000 UTC m=+657.112432126" observedRunningTime="2026-02-23 00:17:43.847170038 +0000 UTC m=+661.781011894" watchObservedRunningTime="2026-02-23 00:17:43.850573697 +0000 UTC m=+661.784415543"
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.925431 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.926588 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.928737 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca"
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.929165 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config"
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.929225 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca"
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.931271 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg"
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.938369 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-lhng4"
Feb 23 00:17:43 crc kubenswrapper[4953]: I0223 00:17:43.962411 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022405 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022494 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022532 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022565 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ppb\" (UniqueName: \"kubernetes.io/projected/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-kube-api-access-54ppb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022628 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022653 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022704 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022771 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022800 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022847 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022873 4953
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.022897 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.023021 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124539 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124607 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124636 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124663 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124694 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ppb\" (UniqueName: \"kubernetes.io/projected/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-kube-api-access-54ppb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124715 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124742 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124769 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124798 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124822 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124868 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124915 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.124938 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.125344 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.125416 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.125429 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.125582 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.125703 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.125901 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.126309 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.126719 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.127846 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.131950 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.132421 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.133309 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.144164 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ppb\" (UniqueName: \"kubernetes.io/projected/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-kube-api-access-54ppb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.240755 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.539867 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 23 00:17:44 crc kubenswrapper[4953]: I0223 00:17:44.800683 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb","Type":"ContainerStarted","Data":"c8e339830afddf72d1d9d30a2a3962edf7300c2ea46473fa5eefa52b331ad741"} Feb 23 00:17:51 crc kubenswrapper[4953]: I0223 00:17:51.859470 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb","Type":"ContainerStarted","Data":"135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6"} Feb 23 00:17:51 crc kubenswrapper[4953]: E0223 00:17:51.941149 4953 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=433498091424607691, SKID=, 
AKID=9F:BC:64:0D:4A:D6:62:48:B6:A9:28:74:12:CC:5A:BF:A3:88:41:9E failed: x509: certificate signed by unknown authority" Feb 23 00:17:52 crc kubenswrapper[4953]: I0223 00:17:52.970159 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 23 00:17:53 crc kubenswrapper[4953]: I0223 00:17:53.873021 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" containerName="git-clone" containerID="cri-o://135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6" gracePeriod=30 Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.293269 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_24ad97d8-a85c-42df-9c7a-9a7a3b604dcb/git-clone/0.log" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.293623 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.410978 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-node-pullsecrets\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411060 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-run\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411089 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-proxy-ca-bundles\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411130 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-pull\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411173 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-system-configs\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411159 4953 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411195 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildworkdir\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411348 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildcachedir\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411467 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411499 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-root\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411535 4953 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411586 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-ca-bundles\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411610 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-blob-cache\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411642 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54ppb\" (UniqueName: \"kubernetes.io/projected/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-kube-api-access-54ppb\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411682 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-push\") pod \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\" (UID: \"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb\") " Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411711 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.411990 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.412251 4953 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.412266 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.412278 4953 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.412309 4953 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.412322 4953 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.413622 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.413649 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.413936 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.414464 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.418996 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-pull" (OuterVolumeSpecName: "builder-dockercfg-lhng4-pull") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "builder-dockercfg-lhng4-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.419016 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.419148 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-kube-api-access-54ppb" (OuterVolumeSpecName: "kube-api-access-54ppb") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "kube-api-access-54ppb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.419777 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-push" (OuterVolumeSpecName: "builder-dockercfg-lhng4-push") pod "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" (UID: "24ad97d8-a85c-42df-9c7a-9a7a3b604dcb"). InnerVolumeSpecName "builder-dockercfg-lhng4-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513760 4953 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-pull\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513819 4953 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513837 4953 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513856 4953 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513873 4953 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513890 4953 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513907 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54ppb\" (UniqueName: \"kubernetes.io/projected/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-kube-api-access-54ppb\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513924 4953 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-builder-dockercfg-lhng4-push\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.513941 4953 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.898076 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_24ad97d8-a85c-42df-9c7a-9a7a3b604dcb/git-clone/0.log"
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.898137 4953 generic.go:334] "Generic (PLEG): container finished" podID="24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" containerID="135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6" exitCode=1
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.898170 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb","Type":"ContainerDied","Data":"135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6"}
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.898212 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"24ad97d8-a85c-42df-9c7a-9a7a3b604dcb","Type":"ContainerDied","Data":"c8e339830afddf72d1d9d30a2a3962edf7300c2ea46473fa5eefa52b331ad741"}
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.898235 4953 scope.go:117] "RemoveContainer" containerID="135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6"
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.898255 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.920808 4953 scope.go:117] "RemoveContainer" containerID="135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6"
Feb 23 00:17:54 crc kubenswrapper[4953]: E0223 00:17:54.921150 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6\": container with ID starting with 135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6 not found: ID does not exist" containerID="135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6"
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.921210 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6"} err="failed to get container status \"135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6\": rpc error: code = NotFound desc = could not find container \"135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6\": container with ID starting with 135124a3f6c326211d22e6e1053d1454495cb35f0f086156f66b20e775ae2dd6 not found: ID does not exist"
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.935337 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 23 00:17:54 crc kubenswrapper[4953]: I0223 00:17:54.938015 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 23 00:17:55 crc kubenswrapper[4953]: I0223 00:17:55.320279 4953 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f4670eb4-d5d9-4f15-b7c3-49ee33ba7412" containerName="elasticsearch" probeResult="failure" output=<
Feb 23 00:17:55 crc kubenswrapper[4953]: {"timestamp": "2026-02-23T00:17:55+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 23 00:17:55 crc kubenswrapper[4953]: >
Feb 23 00:17:55 crc kubenswrapper[4953]: I0223 00:17:55.350191 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" path="/var/lib/kubelet/pods/24ad97d8-a85c-42df-9c7a-9a7a3b604dcb/volumes"
Feb 23 00:18:00 crc kubenswrapper[4953]: I0223 00:18:00.598133 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.399642 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 23 00:18:04 crc kubenswrapper[4953]: E0223 00:18:04.400413 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" containerName="git-clone"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.400431 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" containerName="git-clone"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.400614 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ad97d8-a85c-42df-9c7a-9a7a3b604dcb" containerName="git-clone"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.401585 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.403937 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-ca"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.404139 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-sys-config"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.404393 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-global-ca"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.404488 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.408112 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-lhng4"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.451969 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469110 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469192 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469229 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469278 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469477 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469516 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469578 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469619 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469674 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469717 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kphl\" (UniqueName: \"kubernetes.io/projected/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-kube-api-access-4kphl\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469755 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469787 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.469817 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571350 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571413 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571452 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571480 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571500 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571522 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571567 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571591 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kphl\" (UniqueName: \"kubernetes.io/projected/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-kube-api-access-4kphl\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571624 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571651 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571672 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571715 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571744 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571782 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.571987 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.572041 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.572269 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.572484 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.572035 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.572715 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.572800 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.572818 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.578516 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.578803 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.581764 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.601873 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kphl\" (UniqueName: \"kubernetes.io/projected/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-kube-api-access-4kphl\") pod \"service-telemetry-framework-index-2-build\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.718877 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:04 crc kubenswrapper[4953]: I0223 00:18:04.973620 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 23 00:18:05 crc kubenswrapper[4953]: I0223 00:18:05.965877 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb","Type":"ContainerStarted","Data":"7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5"}
Feb 23 00:18:05 crc kubenswrapper[4953]: I0223 00:18:05.966221 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb","Type":"ContainerStarted","Data":"5581350b7ecf6b3751c52f45ba11b478290615d6b05d6757108e6c59c4db07dd"}
Feb 23 00:18:06 crc kubenswrapper[4953]: E0223 00:18:06.032245 4953 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=433498091424607691, SKID=, AKID=9F:BC:64:0D:4A:D6:62:48:B6:A9:28:74:12:CC:5A:BF:A3:88:41:9E failed: x509: certificate signed by unknown authority"
Feb 23 00:18:07 crc kubenswrapper[4953]: I0223 00:18:07.055589 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 23 00:18:07 crc kubenswrapper[4953]: I0223 00:18:07.976637 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" containerName="git-clone" containerID="cri-o://7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5" gracePeriod=30
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.400100 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb/git-clone/0.log"
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.400559 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.521978 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-system-configs\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522049 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-pull\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522069 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-run\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522109 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522132 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kphl\" (UniqueName: \"kubernetes.io/projected/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-kube-api-access-4kphl\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522149 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildworkdir\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522181 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-ca-bundles\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522241 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildcachedir\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522272 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-proxy-ca-bundles\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522307 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-root\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522326 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-node-pullsecrets\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522343 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-push\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522361 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-blob-cache\") pod \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\" (UID: \"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb\") "
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522538 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522712 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522885 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.522891 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.523098 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.523122 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.523147 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.523161 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.523230 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.527242 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-kube-api-access-4kphl" (OuterVolumeSpecName: "kube-api-access-4kphl") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "kube-api-access-4kphl".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.527327 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-pull" (OuterVolumeSpecName: "builder-dockercfg-lhng4-pull") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "builder-dockercfg-lhng4-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.528391 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-push" (OuterVolumeSpecName: "builder-dockercfg-lhng4-push") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "builder-dockercfg-lhng4-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.528410 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" (UID: "fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623887 4953 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623925 4953 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623934 4953 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-push\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623942 4953 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623951 4953 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623959 4953 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-builder-dockercfg-lhng4-pull\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623967 4953 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-container-storage-run\") on 
node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623975 4953 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623985 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kphl\" (UniqueName: \"kubernetes.io/projected/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-kube-api-access-4kphl\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.623993 4953 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.624001 4953 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.624009 4953 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.624016 4953 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.987470 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb/git-clone/0.log" 
Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.987904 4953 generic.go:334] "Generic (PLEG): container finished" podID="fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" containerID="7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5" exitCode=1 Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.987956 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb","Type":"ContainerDied","Data":"7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5"} Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.988007 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb","Type":"ContainerDied","Data":"5581350b7ecf6b3751c52f45ba11b478290615d6b05d6757108e6c59c4db07dd"} Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.988041 4953 scope.go:117] "RemoveContainer" containerID="7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5" Feb 23 00:18:08 crc kubenswrapper[4953]: I0223 00:18:08.988084 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Feb 23 00:18:09 crc kubenswrapper[4953]: I0223 00:18:09.007806 4953 scope.go:117] "RemoveContainer" containerID="7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5" Feb 23 00:18:09 crc kubenswrapper[4953]: E0223 00:18:09.008358 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5\": container with ID starting with 7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5 not found: ID does not exist" containerID="7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5" Feb 23 00:18:09 crc kubenswrapper[4953]: I0223 00:18:09.008406 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5"} err="failed to get container status \"7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5\": rpc error: code = NotFound desc = could not find container \"7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5\": container with ID starting with 7c2b27e6209f5bd3c66c4db1f6cca73fc15f42abf779aa093ef47c90089399f5 not found: ID does not exist" Feb 23 00:18:09 crc kubenswrapper[4953]: I0223 00:18:09.029772 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Feb 23 00:18:09 crc kubenswrapper[4953]: I0223 00:18:09.035248 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Feb 23 00:18:09 crc kubenswrapper[4953]: I0223 00:18:09.333143 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" path="/var/lib/kubelet/pods/fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb/volumes" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 
00:18:18.537206 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 23 00:18:18 crc kubenswrapper[4953]: E0223 00:18:18.537713 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" containerName="git-clone" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.537726 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" containerName="git-clone" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.537844 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa1c379c-2d7c-4d3a-9cd8-0c160f4ba2eb" containerName="git-clone" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.538646 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.540867 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-global-ca" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.541070 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-lhng4" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.542045 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-sys-config" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.542372 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-ca" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.543521 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.564938 4953 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649564 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649701 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649781 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649838 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649867 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649887 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94g8\" (UniqueName: \"kubernetes.io/projected/59aea676-041b-4eda-9003-ba1859f5458c-kube-api-access-b94g8\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649904 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649923 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.649964 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.650026 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.650044 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.650067 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.650161 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751764 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751824 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751854 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751873 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751905 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751920 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94g8\" (UniqueName: \"kubernetes.io/projected/59aea676-041b-4eda-9003-ba1859f5458c-kube-api-access-b94g8\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751939 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751957 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751956 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751981 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.751999 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752020 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752052 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752082 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752138 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752702 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752706 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752808 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 
00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752919 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.752996 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.753329 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.753412 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.758046 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-pull\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.758175 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.758337 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.783373 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94g8\" (UniqueName: \"kubernetes.io/projected/59aea676-041b-4eda-9003-ba1859f5458c-kube-api-access-b94g8\") pod \"service-telemetry-framework-index-3-build\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:18 crc kubenswrapper[4953]: I0223 00:18:18.854271 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:19 crc kubenswrapper[4953]: I0223 00:18:19.073387 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 23 00:18:20 crc kubenswrapper[4953]: I0223 00:18:20.064379 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"59aea676-041b-4eda-9003-ba1859f5458c","Type":"ContainerStarted","Data":"c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1"} Feb 23 00:18:20 crc kubenswrapper[4953]: I0223 00:18:20.064706 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"59aea676-041b-4eda-9003-ba1859f5458c","Type":"ContainerStarted","Data":"fff465a6cb046b8fae6fc5dc895e78e4b45003cb2c8031d2bf2dbe3d11c18d99"} Feb 23 00:18:20 crc kubenswrapper[4953]: E0223 00:18:20.134258 4953 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=433498091424607691, SKID=, AKID=9F:BC:64:0D:4A:D6:62:48:B6:A9:28:74:12:CC:5A:BF:A3:88:41:9E failed: x509: certificate signed by unknown authority" Feb 23 00:18:21 crc kubenswrapper[4953]: I0223 00:18:21.166710 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.080990 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="59aea676-041b-4eda-9003-ba1859f5458c" containerName="git-clone" containerID="cri-o://c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1" gracePeriod=30 Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.551701 4953 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_59aea676-041b-4eda-9003-ba1859f5458c/git-clone/0.log" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.551864 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611003 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-run\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611076 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-root\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611109 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611144 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-build-blob-cache\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611173 4953 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-ca-bundles\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611247 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-node-pullsecrets\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611329 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-system-configs\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611384 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-buildworkdir\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611434 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-pull\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611447 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod 
"59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611493 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b94g8\" (UniqueName: \"kubernetes.io/projected/59aea676-041b-4eda-9003-ba1859f5458c-kube-api-access-b94g8\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611549 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-push\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611586 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-proxy-ca-bundles\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611619 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-buildcachedir\") pod \"59aea676-041b-4eda-9003-ba1859f5458c\" (UID: \"59aea676-041b-4eda-9003-ba1859f5458c\") " Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.611990 4953 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 
00:18:22.611970 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.612037 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.612085 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.612109 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.612148 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.612637 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.612945 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.615725 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.618730 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-pull" (OuterVolumeSpecName: "builder-dockercfg-lhng4-pull") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "builder-dockercfg-lhng4-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.619080 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-push" (OuterVolumeSpecName: "builder-dockercfg-lhng4-push") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "builder-dockercfg-lhng4-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.621537 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.622681 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59aea676-041b-4eda-9003-ba1859f5458c-kube-api-access-b94g8" (OuterVolumeSpecName: "kube-api-access-b94g8") pod "59aea676-041b-4eda-9003-ba1859f5458c" (UID: "59aea676-041b-4eda-9003-ba1859f5458c"). InnerVolumeSpecName "kube-api-access-b94g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713762 4953 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713800 4953 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713820 4953 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713832 4953 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713845 4953 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713857 4953 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713869 4953 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/59aea676-041b-4eda-9003-ba1859f5458c-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713881 4953 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-pull\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713893 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b94g8\" (UniqueName: \"kubernetes.io/projected/59aea676-041b-4eda-9003-ba1859f5458c-kube-api-access-b94g8\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713904 4953 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/59aea676-041b-4eda-9003-ba1859f5458c-builder-dockercfg-lhng4-push\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713915 4953 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59aea676-041b-4eda-9003-ba1859f5458c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:22 crc kubenswrapper[4953]: I0223 00:18:22.713926 4953 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/59aea676-041b-4eda-9003-ba1859f5458c-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.093158 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_59aea676-041b-4eda-9003-ba1859f5458c/git-clone/0.log" Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.093667 4953 generic.go:334] "Generic (PLEG): container finished" podID="59aea676-041b-4eda-9003-ba1859f5458c" 
containerID="c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1" exitCode=1 Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.093714 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"59aea676-041b-4eda-9003-ba1859f5458c","Type":"ContainerDied","Data":"c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1"} Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.093762 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"59aea676-041b-4eda-9003-ba1859f5458c","Type":"ContainerDied","Data":"fff465a6cb046b8fae6fc5dc895e78e4b45003cb2c8031d2bf2dbe3d11c18d99"} Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.093776 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.093792 4953 scope.go:117] "RemoveContainer" containerID="c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1" Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.128111 4953 scope.go:117] "RemoveContainer" containerID="c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1" Feb 23 00:18:23 crc kubenswrapper[4953]: E0223 00:18:23.128612 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1\": container with ID starting with c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1 not found: ID does not exist" containerID="c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1" Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.128653 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1"} err="failed to get container status \"c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1\": rpc error: code = NotFound desc = could not find container \"c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1\": container with ID starting with c8d7ae6379da25bf64707ab2fc217031f71c47f70074217615cd941da4a038a1 not found: ID does not exist" Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.140572 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.153515 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 23 00:18:23 crc kubenswrapper[4953]: I0223 00:18:23.337017 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59aea676-041b-4eda-9003-ba1859f5458c" path="/var/lib/kubelet/pods/59aea676-041b-4eda-9003-ba1859f5458c/volumes" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.560444 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 23 00:18:32 crc kubenswrapper[4953]: E0223 00:18:32.561936 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59aea676-041b-4eda-9003-ba1859f5458c" containerName="git-clone" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.561962 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="59aea676-041b-4eda-9003-ba1859f5458c" containerName="git-clone" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.562161 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="59aea676-041b-4eda-9003-ba1859f5458c" containerName="git-clone" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.564773 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.567540 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-global-ca" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.567877 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-sys-config" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.568372 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-4-ca" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.568581 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-lhng4" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.569371 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.597210 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.718733 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.718796 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.718868 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.718914 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.718939 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.718968 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " 
pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.718998 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.719047 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.719079 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.719133 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.719166 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.719211 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgwz\" (UniqueName: \"kubernetes.io/projected/310ffc96-7769-4a5b-9f34-3350469e5391-kube-api-access-7fgwz\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.719239 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820547 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820622 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820664 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820704 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820728 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820755 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc 
kubenswrapper[4953]: I0223 00:18:32.820786 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820817 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820844 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820879 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820916 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820950 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fgwz\" (UniqueName: \"kubernetes.io/projected/310ffc96-7769-4a5b-9f34-3350469e5391-kube-api-access-7fgwz\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.820981 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.821259 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.821566 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc 
kubenswrapper[4953]: I0223 00:18:32.821717 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.821962 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.822127 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.822189 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.821929 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: 
\"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.822532 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.824060 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.830596 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.832660 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.834127 4953 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.847395 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fgwz\" (UniqueName: \"kubernetes.io/projected/310ffc96-7769-4a5b-9f34-3350469e5391-kube-api-access-7fgwz\") pod \"service-telemetry-framework-index-4-build\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:32 crc kubenswrapper[4953]: I0223 00:18:32.887549 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:33 crc kubenswrapper[4953]: I0223 00:18:33.120103 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 23 00:18:33 crc kubenswrapper[4953]: I0223 00:18:33.176187 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"310ffc96-7769-4a5b-9f34-3350469e5391","Type":"ContainerStarted","Data":"d2e5b27c431dfbaa04c09514e83474f42f14daed4c9ec16d1c8d466a8e1c365f"} Feb 23 00:18:34 crc kubenswrapper[4953]: I0223 00:18:34.206078 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"310ffc96-7769-4a5b-9f34-3350469e5391","Type":"ContainerStarted","Data":"031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26"} Feb 23 00:18:34 crc kubenswrapper[4953]: E0223 00:18:34.282694 4953 server.go:309] "Unable to authenticate the request due to an error" 
err="verifying certificate SN=433498091424607691, SKID=, AKID=9F:BC:64:0D:4A:D6:62:48:B6:A9:28:74:12:CC:5A:BF:A3:88:41:9E failed: x509: certificate signed by unknown authority" Feb 23 00:18:35 crc kubenswrapper[4953]: I0223 00:18:35.343780 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.222538 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="310ffc96-7769-4a5b-9f34-3350469e5391" containerName="git-clone" containerID="cri-o://031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26" gracePeriod=30 Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.433210 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-gwkj9"] Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.434060 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-gwkj9" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.436691 4953 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-74cg4" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.449421 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-gwkj9"] Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.583829 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4p4t\" (UniqueName: \"kubernetes.io/projected/3bde6e9a-2daf-4e47-a25d-6901264419e6-kube-api-access-q4p4t\") pod \"infrawatch-operators-gwkj9\" (UID: \"3bde6e9a-2daf-4e47-a25d-6901264419e6\") " pod="service-telemetry/infrawatch-operators-gwkj9" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.652063 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_310ffc96-7769-4a5b-9f34-3350469e5391/git-clone/0.log" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.652144 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.685576 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4p4t\" (UniqueName: \"kubernetes.io/projected/3bde6e9a-2daf-4e47-a25d-6901264419e6-kube-api-access-q4p4t\") pod \"infrawatch-operators-gwkj9\" (UID: \"3bde6e9a-2daf-4e47-a25d-6901264419e6\") " pod="service-telemetry/infrawatch-operators-gwkj9" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.713425 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4p4t\" (UniqueName: \"kubernetes.io/projected/3bde6e9a-2daf-4e47-a25d-6901264419e6-kube-api-access-q4p4t\") pod \"infrawatch-operators-gwkj9\" (UID: \"3bde6e9a-2daf-4e47-a25d-6901264419e6\") " pod="service-telemetry/infrawatch-operators-gwkj9" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786564 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-root\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786634 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-pull\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786680 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-service-telemetry-framework-index-dockercfg-user-build-volume\") pod 
\"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786719 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-build-blob-cache\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786765 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-buildcachedir\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786799 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-system-configs\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786857 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fgwz\" (UniqueName: \"kubernetes.io/projected/310ffc96-7769-4a5b-9f34-3350469e5391-kube-api-access-7fgwz\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786926 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-node-pullsecrets\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786907 4953 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.786972 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-buildworkdir\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787003 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-push\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787049 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-ca-bundles\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787091 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-run\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787128 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-proxy-ca-bundles\") pod \"310ffc96-7769-4a5b-9f34-3350469e5391\" (UID: \"310ffc96-7769-4a5b-9f34-3350469e5391\") " Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787201 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787427 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787449 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787559 4953 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787601 4953 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787610 4953 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787619 4953 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/310ffc96-7769-4a5b-9f34-3350469e5391-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.787876 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.788024 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.788095 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.788304 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.788654 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.790002 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310ffc96-7769-4a5b-9f34-3350469e5391-kube-api-access-7fgwz" (OuterVolumeSpecName: "kube-api-access-7fgwz") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "kube-api-access-7fgwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.790922 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.791498 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-pull" (OuterVolumeSpecName: "builder-dockercfg-lhng4-pull") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "builder-dockercfg-lhng4-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.792581 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-push" (OuterVolumeSpecName: "builder-dockercfg-lhng4-push") pod "310ffc96-7769-4a5b-9f34-3350469e5391" (UID: "310ffc96-7769-4a5b-9f34-3350469e5391"). InnerVolumeSpecName "builder-dockercfg-lhng4-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.813991 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-gwkj9" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888500 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fgwz\" (UniqueName: \"kubernetes.io/projected/310ffc96-7769-4a5b-9f34-3350469e5391-kube-api-access-7fgwz\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888537 4953 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888550 4953 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-lhng4-push\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-push\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888563 4953 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888575 4953 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/310ffc96-7769-4a5b-9f34-3350469e5391-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888587 4953 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888600 4953 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-lhng4-pull\" (UniqueName: 
\"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-builder-dockercfg-lhng4-pull\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888613 4953 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/310ffc96-7769-4a5b-9f34-3350469e5391-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:36 crc kubenswrapper[4953]: I0223 00:18:36.888631 4953 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/310ffc96-7769-4a5b-9f34-3350469e5391-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.079710 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-gwkj9"] Feb 23 00:18:37 crc kubenswrapper[4953]: W0223 00:18:37.089818 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bde6e9a_2daf_4e47_a25d_6901264419e6.slice/crio-fcafa651092476bb4f52f015e75a8090060a793e9163fc140a3fc8c31d21fff5 WatchSource:0}: Error finding container fcafa651092476bb4f52f015e75a8090060a793e9163fc140a3fc8c31d21fff5: Status 404 returned error can't find the container with id fcafa651092476bb4f52f015e75a8090060a793e9163fc140a3fc8c31d21fff5 Feb 23 00:18:37 crc kubenswrapper[4953]: E0223 00:18:37.127548 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" 
image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:18:37 crc kubenswrapper[4953]: E0223 00:18:37.127728 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4p4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-gwkj9_service-telemetry(3bde6e9a-2daf-4e47-a25d-6901264419e6): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:18:37 crc kubenswrapper[4953]: E0223 00:18:37.129059 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-gwkj9" 
podUID="3bde6e9a-2daf-4e47-a25d-6901264419e6" Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.231849 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_310ffc96-7769-4a5b-9f34-3350469e5391/git-clone/0.log" Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.231898 4953 generic.go:334] "Generic (PLEG): container finished" podID="310ffc96-7769-4a5b-9f34-3350469e5391" containerID="031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26" exitCode=1 Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.231978 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"310ffc96-7769-4a5b-9f34-3350469e5391","Type":"ContainerDied","Data":"031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26"} Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.232076 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"310ffc96-7769-4a5b-9f34-3350469e5391","Type":"ContainerDied","Data":"d2e5b27c431dfbaa04c09514e83474f42f14daed4c9ec16d1c8d466a8e1c365f"} Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.232008 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.232110 4953 scope.go:117] "RemoveContainer" containerID="031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26" Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.234976 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-gwkj9" event={"ID":"3bde6e9a-2daf-4e47-a25d-6901264419e6","Type":"ContainerStarted","Data":"fcafa651092476bb4f52f015e75a8090060a793e9163fc140a3fc8c31d21fff5"} Feb 23 00:18:37 crc kubenswrapper[4953]: E0223 00:18:37.236630 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-gwkj9" podUID="3bde6e9a-2daf-4e47-a25d-6901264419e6" Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.249166 4953 scope.go:117] "RemoveContainer" containerID="031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26" Feb 23 00:18:37 crc kubenswrapper[4953]: E0223 00:18:37.249934 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26\": container with ID starting with 031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26 not found: ID does not exist" containerID="031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26" Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.249998 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26"} err="failed to get container status 
\"031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26\": rpc error: code = NotFound desc = could not find container \"031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26\": container with ID starting with 031ba78e7321f0d59c12e836ac3be14f67749275494eb97a56f338f5699e8a26 not found: ID does not exist" Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.283496 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.291235 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Feb 23 00:18:37 crc kubenswrapper[4953]: I0223 00:18:37.346301 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310ffc96-7769-4a5b-9f34-3350469e5391" path="/var/lib/kubelet/pods/310ffc96-7769-4a5b-9f34-3350469e5391/volumes" Feb 23 00:18:38 crc kubenswrapper[4953]: E0223 00:18:38.246379 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-gwkj9" podUID="3bde6e9a-2daf-4e47-a25d-6901264419e6" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.033983 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-gwkj9"] Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.329926 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-gwkj9" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.344926 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4p4t\" (UniqueName: \"kubernetes.io/projected/3bde6e9a-2daf-4e47-a25d-6901264419e6-kube-api-access-q4p4t\") pod \"3bde6e9a-2daf-4e47-a25d-6901264419e6\" (UID: \"3bde6e9a-2daf-4e47-a25d-6901264419e6\") " Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.352572 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bde6e9a-2daf-4e47-a25d-6901264419e6-kube-api-access-q4p4t" (OuterVolumeSpecName: "kube-api-access-q4p4t") pod "3bde6e9a-2daf-4e47-a25d-6901264419e6" (UID: "3bde6e9a-2daf-4e47-a25d-6901264419e6"). InnerVolumeSpecName "kube-api-access-q4p4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.447259 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4p4t\" (UniqueName: \"kubernetes.io/projected/3bde6e9a-2daf-4e47-a25d-6901264419e6-kube-api-access-q4p4t\") on node \"crc\" DevicePath \"\"" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.839322 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-szrd7"] Feb 23 00:18:41 crc kubenswrapper[4953]: E0223 00:18:41.839633 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310ffc96-7769-4a5b-9f34-3350469e5391" containerName="git-clone" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.839648 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="310ffc96-7769-4a5b-9f34-3350469e5391" containerName="git-clone" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.839786 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="310ffc96-7769-4a5b-9f34-3350469e5391" containerName="git-clone" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 
00:18:41.840280 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-szrd7" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.852591 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9kl\" (UniqueName: \"kubernetes.io/projected/fcc5fd41-349f-4826-8b86-e7aef82e8f7b-kube-api-access-nc9kl\") pod \"infrawatch-operators-szrd7\" (UID: \"fcc5fd41-349f-4826-8b86-e7aef82e8f7b\") " pod="service-telemetry/infrawatch-operators-szrd7" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.860229 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-szrd7"] Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.955016 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9kl\" (UniqueName: \"kubernetes.io/projected/fcc5fd41-349f-4826-8b86-e7aef82e8f7b-kube-api-access-nc9kl\") pod \"infrawatch-operators-szrd7\" (UID: \"fcc5fd41-349f-4826-8b86-e7aef82e8f7b\") " pod="service-telemetry/infrawatch-operators-szrd7" Feb 23 00:18:41 crc kubenswrapper[4953]: I0223 00:18:41.988099 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9kl\" (UniqueName: \"kubernetes.io/projected/fcc5fd41-349f-4826-8b86-e7aef82e8f7b-kube-api-access-nc9kl\") pod \"infrawatch-operators-szrd7\" (UID: \"fcc5fd41-349f-4826-8b86-e7aef82e8f7b\") " pod="service-telemetry/infrawatch-operators-szrd7" Feb 23 00:18:42 crc kubenswrapper[4953]: I0223 00:18:42.205161 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-szrd7" Feb 23 00:18:42 crc kubenswrapper[4953]: I0223 00:18:42.286808 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-gwkj9" event={"ID":"3bde6e9a-2daf-4e47-a25d-6901264419e6","Type":"ContainerDied","Data":"fcafa651092476bb4f52f015e75a8090060a793e9163fc140a3fc8c31d21fff5"} Feb 23 00:18:42 crc kubenswrapper[4953]: I0223 00:18:42.286881 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-gwkj9" Feb 23 00:18:42 crc kubenswrapper[4953]: I0223 00:18:42.350937 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-gwkj9"] Feb 23 00:18:42 crc kubenswrapper[4953]: I0223 00:18:42.364209 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-gwkj9"] Feb 23 00:18:42 crc kubenswrapper[4953]: I0223 00:18:42.475579 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-szrd7"] Feb 23 00:18:42 crc kubenswrapper[4953]: E0223 00:18:42.515916 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:18:42 crc kubenswrapper[4953]: E0223 00:18:42.516168 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-szrd7_service-telemetry(fcc5fd41-349f-4826-8b86-e7aef82e8f7b): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:18:42 crc kubenswrapper[4953]: E0223 00:18:42.517568 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:18:43 crc kubenswrapper[4953]: I0223 00:18:43.293588 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szrd7" event={"ID":"fcc5fd41-349f-4826-8b86-e7aef82e8f7b","Type":"ContainerStarted","Data":"738b6a7283a8e7c4e1f570cf1729e43bcfe2c3ab4f03e8d64aae7f83ff0d2099"} Feb 23 00:18:43 crc kubenswrapper[4953]: E0223 00:18:43.295567 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" 
podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:18:43 crc kubenswrapper[4953]: I0223 00:18:43.333189 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bde6e9a-2daf-4e47-a25d-6901264419e6" path="/var/lib/kubelet/pods/3bde6e9a-2daf-4e47-a25d-6901264419e6/volumes" Feb 23 00:18:44 crc kubenswrapper[4953]: E0223 00:18:44.302389 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:18:55 crc kubenswrapper[4953]: E0223 00:18:55.383057 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:18:55 crc kubenswrapper[4953]: E0223 00:18:55.384556 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-szrd7_service-telemetry(fcc5fd41-349f-4826-8b86-e7aef82e8f7b): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:18:55 crc kubenswrapper[4953]: E0223 00:18:55.386407 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:19:09 crc kubenswrapper[4953]: E0223 00:19:09.328839 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:19:19 crc kubenswrapper[4953]: I0223 00:19:19.056432 4953 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 00:19:22 crc kubenswrapper[4953]: E0223 00:19:22.371580 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:19:22 crc kubenswrapper[4953]: E0223 00:19:22.372466 4953 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-szrd7_service-telemetry(fcc5fd41-349f-4826-8b86-e7aef82e8f7b): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:19:22 crc kubenswrapper[4953]: E0223 00:19:22.373728 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:19:35 crc kubenswrapper[4953]: E0223 00:19:35.329521 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:19:44 crc kubenswrapper[4953]: I0223 00:19:44.700078 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:19:44 crc kubenswrapper[4953]: I0223 00:19:44.700688 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:19:46 crc kubenswrapper[4953]: E0223 00:19:46.328830 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:19:59 crc kubenswrapper[4953]: E0223 00:19:59.331326 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:20:11 crc kubenswrapper[4953]: E0223 00:20:11.357739 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:20:11 crc kubenswrapper[4953]: E0223 00:20:11.358616 4953 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-szrd7_service-telemetry(fcc5fd41-349f-4826-8b86-e7aef82e8f7b): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:20:11 crc kubenswrapper[4953]: E0223 00:20:11.359946 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:20:14 crc kubenswrapper[4953]: I0223 00:20:14.700736 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:20:14 crc kubenswrapper[4953]: I0223 00:20:14.701406 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Feb 23 00:20:25 crc kubenswrapper[4953]: E0223 00:20:25.328442 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:20:36 crc kubenswrapper[4953]: E0223 00:20:36.331518 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:20:44 crc kubenswrapper[4953]: I0223 00:20:44.700462 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:20:44 crc kubenswrapper[4953]: I0223 00:20:44.701198 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:20:44 crc kubenswrapper[4953]: I0223 00:20:44.701247 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:20:44 crc kubenswrapper[4953]: I0223 00:20:44.701712 4953 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f49e743a8b3d2741657144ee2c2b4859d85b42b2b42be220c9172980ca78223a"} pod="openshift-machine-config-operator/machine-config-daemon-gpl86" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 00:20:44 crc kubenswrapper[4953]: I0223 00:20:44.701765 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" containerID="cri-o://f49e743a8b3d2741657144ee2c2b4859d85b42b2b42be220c9172980ca78223a" gracePeriod=600 Feb 23 00:20:45 crc kubenswrapper[4953]: I0223 00:20:45.152484 4953 generic.go:334] "Generic (PLEG): container finished" podID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerID="f49e743a8b3d2741657144ee2c2b4859d85b42b2b42be220c9172980ca78223a" exitCode=0 Feb 23 00:20:45 crc kubenswrapper[4953]: I0223 00:20:45.152532 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerDied","Data":"f49e743a8b3d2741657144ee2c2b4859d85b42b2b42be220c9172980ca78223a"} Feb 23 00:20:45 crc kubenswrapper[4953]: I0223 00:20:45.152566 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"0b9977e31f9f0b146ca16d9707802b2ffd298c08bbaca105db28ec645ce6eb46"} Feb 23 00:20:45 crc kubenswrapper[4953]: I0223 00:20:45.152586 4953 scope.go:117] "RemoveContainer" containerID="6ebcf43b0252f85e33937cd0598c95271871ca36e48dd4c099739b60157b6192" Feb 23 00:20:48 crc kubenswrapper[4953]: E0223 00:20:48.327884 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:21:00 crc kubenswrapper[4953]: E0223 00:21:00.332973 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:21:13 crc kubenswrapper[4953]: E0223 00:21:13.335728 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:21:25 crc kubenswrapper[4953]: E0223 00:21:25.329914 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:21:38 crc kubenswrapper[4953]: I0223 00:21:38.331795 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 00:21:38 crc kubenswrapper[4953]: E0223 00:21:38.381557 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:21:38 crc kubenswrapper[4953]: E0223 00:21:38.381863 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-szrd7_service-telemetry(fcc5fd41-349f-4826-8b86-e7aef82e8f7b): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:21:38 crc kubenswrapper[4953]: E0223 00:21:38.383461 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-szrd7" 
podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.248432 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vcl55"] Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.252075 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.265606 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vcl55"] Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.318615 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-catalog-content\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.318700 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94d6k\" (UniqueName: \"kubernetes.io/projected/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-kube-api-access-94d6k\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.318775 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-utilities\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.420375 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-catalog-content\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.420467 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94d6k\" (UniqueName: \"kubernetes.io/projected/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-kube-api-access-94d6k\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.421023 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-utilities\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.421051 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-catalog-content\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.422020 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-utilities\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.443878 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94d6k\" (UniqueName: 
\"kubernetes.io/projected/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-kube-api-access-94d6k\") pod \"redhat-operators-vcl55\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:46 crc kubenswrapper[4953]: I0223 00:21:46.633563 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:47 crc kubenswrapper[4953]: I0223 00:21:47.096206 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vcl55"] Feb 23 00:21:47 crc kubenswrapper[4953]: I0223 00:21:47.737146 4953 generic.go:334] "Generic (PLEG): container finished" podID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerID="a8a1e3b0a401928c93e9ea6e7c92abfebdb79290e81ef775d9e60ee7d629cfa5" exitCode=0 Feb 23 00:21:47 crc kubenswrapper[4953]: I0223 00:21:47.737236 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcl55" event={"ID":"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3","Type":"ContainerDied","Data":"a8a1e3b0a401928c93e9ea6e7c92abfebdb79290e81ef775d9e60ee7d629cfa5"} Feb 23 00:21:47 crc kubenswrapper[4953]: I0223 00:21:47.738460 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcl55" event={"ID":"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3","Type":"ContainerStarted","Data":"cead27668cf0bb49e671b51926b34f95e7cabbfe4489c91172408bd38ab234c7"} Feb 23 00:21:48 crc kubenswrapper[4953]: I0223 00:21:48.749491 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcl55" event={"ID":"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3","Type":"ContainerStarted","Data":"989a8e361d46c3cb8884c6552e181c1057b3b39ff23b25f577ca49a8f0c7f320"} Feb 23 00:21:49 crc kubenswrapper[4953]: I0223 00:21:49.758694 4953 generic.go:334] "Generic (PLEG): container finished" podID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" 
containerID="989a8e361d46c3cb8884c6552e181c1057b3b39ff23b25f577ca49a8f0c7f320" exitCode=0 Feb 23 00:21:49 crc kubenswrapper[4953]: I0223 00:21:49.758813 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcl55" event={"ID":"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3","Type":"ContainerDied","Data":"989a8e361d46c3cb8884c6552e181c1057b3b39ff23b25f577ca49a8f0c7f320"} Feb 23 00:21:50 crc kubenswrapper[4953]: I0223 00:21:50.767052 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcl55" event={"ID":"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3","Type":"ContainerStarted","Data":"24db6867b1d244a2850468f8a93063527c859b8dd261b8918a8d7ab85c457f00"} Feb 23 00:21:50 crc kubenswrapper[4953]: I0223 00:21:50.793824 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vcl55" podStartSLOduration=2.159116347 podStartE2EDuration="4.793797432s" podCreationTimestamp="2026-02-23 00:21:46 +0000 UTC" firstStartedPulling="2026-02-23 00:21:47.7392603 +0000 UTC m=+905.673102146" lastFinishedPulling="2026-02-23 00:21:50.373941385 +0000 UTC m=+908.307783231" observedRunningTime="2026-02-23 00:21:50.786955122 +0000 UTC m=+908.720796968" watchObservedRunningTime="2026-02-23 00:21:50.793797432 +0000 UTC m=+908.727639298" Feb 23 00:21:53 crc kubenswrapper[4953]: E0223 00:21:53.330563 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:21:56 crc kubenswrapper[4953]: I0223 00:21:56.634280 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:56 crc 
kubenswrapper[4953]: I0223 00:21:56.634636 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:56 crc kubenswrapper[4953]: I0223 00:21:56.673709 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:56 crc kubenswrapper[4953]: I0223 00:21:56.860978 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:21:56 crc kubenswrapper[4953]: I0223 00:21:56.919569 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vcl55"] Feb 23 00:21:58 crc kubenswrapper[4953]: I0223 00:21:58.812491 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vcl55" podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerName="registry-server" containerID="cri-o://24db6867b1d244a2850468f8a93063527c859b8dd261b8918a8d7ab85c457f00" gracePeriod=2 Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.317641 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5qr9h"] Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.319021 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.338225 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5qr9h"] Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.517176 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-catalog-content\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.517344 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-utilities\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.517411 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwdcc\" (UniqueName: \"kubernetes.io/projected/0292b0c8-63d4-4a48-843d-8f7f955aca87-kube-api-access-zwdcc\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.618845 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-catalog-content\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.618933 4953 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-utilities\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.618968 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwdcc\" (UniqueName: \"kubernetes.io/projected/0292b0c8-63d4-4a48-843d-8f7f955aca87-kube-api-access-zwdcc\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.619411 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-catalog-content\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.619454 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-utilities\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.638888 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwdcc\" (UniqueName: \"kubernetes.io/projected/0292b0c8-63d4-4a48-843d-8f7f955aca87-kube-api-access-zwdcc\") pod \"certified-operators-5qr9h\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.642817 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:21:59 crc kubenswrapper[4953]: I0223 00:21:59.911274 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5qr9h"] Feb 23 00:22:00 crc kubenswrapper[4953]: I0223 00:22:00.828664 4953 generic.go:334] "Generic (PLEG): container finished" podID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerID="d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8" exitCode=0 Feb 23 00:22:00 crc kubenswrapper[4953]: I0223 00:22:00.828736 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qr9h" event={"ID":"0292b0c8-63d4-4a48-843d-8f7f955aca87","Type":"ContainerDied","Data":"d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8"} Feb 23 00:22:00 crc kubenswrapper[4953]: I0223 00:22:00.829067 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qr9h" event={"ID":"0292b0c8-63d4-4a48-843d-8f7f955aca87","Type":"ContainerStarted","Data":"70d62e975c79f6e9899f928ba797b1cf8a5e0cf05f300597aef8b33812dc523b"} Feb 23 00:22:00 crc kubenswrapper[4953]: I0223 00:22:00.834519 4953 generic.go:334] "Generic (PLEG): container finished" podID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerID="24db6867b1d244a2850468f8a93063527c859b8dd261b8918a8d7ab85c457f00" exitCode=0 Feb 23 00:22:00 crc kubenswrapper[4953]: I0223 00:22:00.834580 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcl55" event={"ID":"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3","Type":"ContainerDied","Data":"24db6867b1d244a2850468f8a93063527c859b8dd261b8918a8d7ab85c457f00"} Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.062138 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.171012 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94d6k\" (UniqueName: \"kubernetes.io/projected/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-kube-api-access-94d6k\") pod \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.171139 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-catalog-content\") pod \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.171170 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-utilities\") pod \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\" (UID: \"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3\") " Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.172120 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-utilities" (OuterVolumeSpecName: "utilities") pod "ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" (UID: "ce80ac0f-af7e-4342-bf0f-bdc2f04032d3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.172366 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.177448 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-kube-api-access-94d6k" (OuterVolumeSpecName: "kube-api-access-94d6k") pod "ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" (UID: "ce80ac0f-af7e-4342-bf0f-bdc2f04032d3"). InnerVolumeSpecName "kube-api-access-94d6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.273117 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94d6k\" (UniqueName: \"kubernetes.io/projected/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-kube-api-access-94d6k\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.295983 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" (UID: "ce80ac0f-af7e-4342-bf0f-bdc2f04032d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.374304 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.852983 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcl55" event={"ID":"ce80ac0f-af7e-4342-bf0f-bdc2f04032d3","Type":"ContainerDied","Data":"cead27668cf0bb49e671b51926b34f95e7cabbfe4489c91172408bd38ab234c7"} Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.853081 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcl55" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.853340 4953 scope.go:117] "RemoveContainer" containerID="24db6867b1d244a2850468f8a93063527c859b8dd261b8918a8d7ab85c457f00" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.871693 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vcl55"] Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.875360 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vcl55"] Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.889631 4953 scope.go:117] "RemoveContainer" containerID="989a8e361d46c3cb8884c6552e181c1057b3b39ff23b25f577ca49a8f0c7f320" Feb 23 00:22:01 crc kubenswrapper[4953]: I0223 00:22:01.975478 4953 scope.go:117] "RemoveContainer" containerID="a8a1e3b0a401928c93e9ea6e7c92abfebdb79290e81ef775d9e60ee7d629cfa5" Feb 23 00:22:02 crc kubenswrapper[4953]: I0223 00:22:02.860737 4953 generic.go:334] "Generic (PLEG): container finished" podID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerID="3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923" exitCode=0 Feb 23 00:22:02 crc 
kubenswrapper[4953]: I0223 00:22:02.860937 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qr9h" event={"ID":"0292b0c8-63d4-4a48-843d-8f7f955aca87","Type":"ContainerDied","Data":"3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923"} Feb 23 00:22:03 crc kubenswrapper[4953]: I0223 00:22:03.332745 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" path="/var/lib/kubelet/pods/ce80ac0f-af7e-4342-bf0f-bdc2f04032d3/volumes" Feb 23 00:22:03 crc kubenswrapper[4953]: I0223 00:22:03.868915 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qr9h" event={"ID":"0292b0c8-63d4-4a48-843d-8f7f955aca87","Type":"ContainerStarted","Data":"c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273"} Feb 23 00:22:03 crc kubenswrapper[4953]: I0223 00:22:03.888539 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5qr9h" podStartSLOduration=2.459535308 podStartE2EDuration="4.888513934s" podCreationTimestamp="2026-02-23 00:21:59 +0000 UTC" firstStartedPulling="2026-02-23 00:22:00.830474462 +0000 UTC m=+918.764316318" lastFinishedPulling="2026-02-23 00:22:03.259453098 +0000 UTC m=+921.193294944" observedRunningTime="2026-02-23 00:22:03.883543293 +0000 UTC m=+921.817385139" watchObservedRunningTime="2026-02-23 00:22:03.888513934 +0000 UTC m=+921.822355780" Feb 23 00:22:06 crc kubenswrapper[4953]: E0223 00:22:06.328807 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:22:09 crc kubenswrapper[4953]: I0223 
00:22:09.643139 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:22:09 crc kubenswrapper[4953]: I0223 00:22:09.643960 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:22:09 crc kubenswrapper[4953]: I0223 00:22:09.703693 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:22:09 crc kubenswrapper[4953]: I0223 00:22:09.957830 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:22:10 crc kubenswrapper[4953]: I0223 00:22:10.015212 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5qr9h"] Feb 23 00:22:11 crc kubenswrapper[4953]: I0223 00:22:11.925772 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5qr9h" podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerName="registry-server" containerID="cri-o://c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273" gracePeriod=2 Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.345259 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.543436 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-catalog-content\") pod \"0292b0c8-63d4-4a48-843d-8f7f955aca87\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.543596 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwdcc\" (UniqueName: \"kubernetes.io/projected/0292b0c8-63d4-4a48-843d-8f7f955aca87-kube-api-access-zwdcc\") pod \"0292b0c8-63d4-4a48-843d-8f7f955aca87\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.544825 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-utilities\") pod \"0292b0c8-63d4-4a48-843d-8f7f955aca87\" (UID: \"0292b0c8-63d4-4a48-843d-8f7f955aca87\") " Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.545812 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-utilities" (OuterVolumeSpecName: "utilities") pod "0292b0c8-63d4-4a48-843d-8f7f955aca87" (UID: "0292b0c8-63d4-4a48-843d-8f7f955aca87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.549726 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0292b0c8-63d4-4a48-843d-8f7f955aca87-kube-api-access-zwdcc" (OuterVolumeSpecName: "kube-api-access-zwdcc") pod "0292b0c8-63d4-4a48-843d-8f7f955aca87" (UID: "0292b0c8-63d4-4a48-843d-8f7f955aca87"). InnerVolumeSpecName "kube-api-access-zwdcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.601828 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0292b0c8-63d4-4a48-843d-8f7f955aca87" (UID: "0292b0c8-63d4-4a48-843d-8f7f955aca87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.647180 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.647352 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwdcc\" (UniqueName: \"kubernetes.io/projected/0292b0c8-63d4-4a48-843d-8f7f955aca87-kube-api-access-zwdcc\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.647435 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0292b0c8-63d4-4a48-843d-8f7f955aca87-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.935574 4953 generic.go:334] "Generic (PLEG): container finished" podID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerID="c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273" exitCode=0 Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.935643 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qr9h" event={"ID":"0292b0c8-63d4-4a48-843d-8f7f955aca87","Type":"ContainerDied","Data":"c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273"} Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.935654 4953 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5qr9h" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.935690 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qr9h" event={"ID":"0292b0c8-63d4-4a48-843d-8f7f955aca87","Type":"ContainerDied","Data":"70d62e975c79f6e9899f928ba797b1cf8a5e0cf05f300597aef8b33812dc523b"} Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.935718 4953 scope.go:117] "RemoveContainer" containerID="c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.963883 4953 scope.go:117] "RemoveContainer" containerID="3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923" Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.980707 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5qr9h"] Feb 23 00:22:12 crc kubenswrapper[4953]: I0223 00:22:12.986493 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5qr9h"] Feb 23 00:22:13 crc kubenswrapper[4953]: I0223 00:22:13.002612 4953 scope.go:117] "RemoveContainer" containerID="d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8" Feb 23 00:22:13 crc kubenswrapper[4953]: I0223 00:22:13.024680 4953 scope.go:117] "RemoveContainer" containerID="c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273" Feb 23 00:22:13 crc kubenswrapper[4953]: E0223 00:22:13.025249 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273\": container with ID starting with c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273 not found: ID does not exist" containerID="c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273" Feb 23 00:22:13 crc kubenswrapper[4953]: I0223 00:22:13.025313 
4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273"} err="failed to get container status \"c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273\": rpc error: code = NotFound desc = could not find container \"c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273\": container with ID starting with c969e94f5e0c82a8d68573ba15091540737457c6a260c78fb603deb0c68bd273 not found: ID does not exist" Feb 23 00:22:13 crc kubenswrapper[4953]: I0223 00:22:13.025342 4953 scope.go:117] "RemoveContainer" containerID="3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923" Feb 23 00:22:13 crc kubenswrapper[4953]: E0223 00:22:13.026421 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923\": container with ID starting with 3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923 not found: ID does not exist" containerID="3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923" Feb 23 00:22:13 crc kubenswrapper[4953]: I0223 00:22:13.026523 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923"} err="failed to get container status \"3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923\": rpc error: code = NotFound desc = could not find container \"3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923\": container with ID starting with 3e2180373ed67a3bee6a834b61a9e79e982554514728416eeafbfb9a0174d923 not found: ID does not exist" Feb 23 00:22:13 crc kubenswrapper[4953]: I0223 00:22:13.026633 4953 scope.go:117] "RemoveContainer" containerID="d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8" Feb 23 00:22:13 crc kubenswrapper[4953]: E0223 
00:22:13.027040 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8\": container with ID starting with d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8 not found: ID does not exist" containerID="d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8" Feb 23 00:22:13 crc kubenswrapper[4953]: I0223 00:22:13.027121 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8"} err="failed to get container status \"d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8\": rpc error: code = NotFound desc = could not find container \"d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8\": container with ID starting with d82919395d8395fbed0ddf6f8e08f604385aaca67d8afd0b4705ca40578c56e8 not found: ID does not exist" Feb 23 00:22:13 crc kubenswrapper[4953]: I0223 00:22:13.339915 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" path="/var/lib/kubelet/pods/0292b0c8-63d4-4a48-843d-8f7f955aca87/volumes" Feb 23 00:22:18 crc kubenswrapper[4953]: E0223 00:22:18.329384 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.935921 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dmq2w"] Feb 23 00:22:20 crc kubenswrapper[4953]: E0223 00:22:20.942072 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerName="extract-utilities" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.942127 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerName="extract-utilities" Feb 23 00:22:20 crc kubenswrapper[4953]: E0223 00:22:20.942185 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerName="registry-server" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.942199 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerName="registry-server" Feb 23 00:22:20 crc kubenswrapper[4953]: E0223 00:22:20.942235 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerName="extract-content" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.942246 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerName="extract-content" Feb 23 00:22:20 crc kubenswrapper[4953]: E0223 00:22:20.942304 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerName="registry-server" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.942316 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerName="registry-server" Feb 23 00:22:20 crc kubenswrapper[4953]: E0223 00:22:20.942346 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerName="extract-content" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.942359 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerName="extract-content" Feb 23 00:22:20 crc kubenswrapper[4953]: E0223 00:22:20.942373 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerName="extract-utilities" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.942398 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerName="extract-utilities" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.943060 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="0292b0c8-63d4-4a48-843d-8f7f955aca87" containerName="registry-server" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.943100 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce80ac0f-af7e-4342-bf0f-bdc2f04032d3" containerName="registry-server" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.945537 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:20 crc kubenswrapper[4953]: I0223 00:22:20.980985 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmq2w"] Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.076904 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294l7\" (UniqueName: \"kubernetes.io/projected/15ad725c-19dd-4619-9576-f9c6fe0a53f3-kube-api-access-294l7\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.077457 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-utilities\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.077527 4953 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-catalog-content\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.179187 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-294l7\" (UniqueName: \"kubernetes.io/projected/15ad725c-19dd-4619-9576-f9c6fe0a53f3-kube-api-access-294l7\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.179251 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-utilities\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.179317 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-catalog-content\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.179868 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-catalog-content\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.179869 4953 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-utilities\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.201614 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-294l7\" (UniqueName: \"kubernetes.io/projected/15ad725c-19dd-4619-9576-f9c6fe0a53f3-kube-api-access-294l7\") pod \"community-operators-dmq2w\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.277509 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.767447 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmq2w"] Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.998465 4953 generic.go:334] "Generic (PLEG): container finished" podID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerID="e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23" exitCode=0 Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.998566 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmq2w" event={"ID":"15ad725c-19dd-4619-9576-f9c6fe0a53f3","Type":"ContainerDied","Data":"e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23"} Feb 23 00:22:21 crc kubenswrapper[4953]: I0223 00:22:21.998810 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmq2w" event={"ID":"15ad725c-19dd-4619-9576-f9c6fe0a53f3","Type":"ContainerStarted","Data":"9fd10e3e4f9e8257f8c3c5f999ba79f77915dabe7f3f1e2e235d542bb546179a"} Feb 23 00:22:23 crc kubenswrapper[4953]: I0223 
00:22:23.006853 4953 generic.go:334] "Generic (PLEG): container finished" podID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerID="4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb" exitCode=0 Feb 23 00:22:23 crc kubenswrapper[4953]: I0223 00:22:23.006904 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmq2w" event={"ID":"15ad725c-19dd-4619-9576-f9c6fe0a53f3","Type":"ContainerDied","Data":"4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb"} Feb 23 00:22:24 crc kubenswrapper[4953]: I0223 00:22:24.022112 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmq2w" event={"ID":"15ad725c-19dd-4619-9576-f9c6fe0a53f3","Type":"ContainerStarted","Data":"2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf"} Feb 23 00:22:24 crc kubenswrapper[4953]: I0223 00:22:24.046698 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dmq2w" podStartSLOduration=2.587831725 podStartE2EDuration="4.046676341s" podCreationTimestamp="2026-02-23 00:22:20 +0000 UTC" firstStartedPulling="2026-02-23 00:22:22.000642301 +0000 UTC m=+939.934484147" lastFinishedPulling="2026-02-23 00:22:23.459486907 +0000 UTC m=+941.393328763" observedRunningTime="2026-02-23 00:22:24.042764468 +0000 UTC m=+941.976606354" watchObservedRunningTime="2026-02-23 00:22:24.046676341 +0000 UTC m=+941.980518187" Feb 23 00:22:31 crc kubenswrapper[4953]: I0223 00:22:31.277672 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:31 crc kubenswrapper[4953]: I0223 00:22:31.278404 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:31 crc kubenswrapper[4953]: I0223 00:22:31.367807 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:32 crc kubenswrapper[4953]: I0223 00:22:32.147965 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:32 crc kubenswrapper[4953]: I0223 00:22:32.195214 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmq2w"] Feb 23 00:22:33 crc kubenswrapper[4953]: E0223 00:22:33.334737 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.091497 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dmq2w" podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerName="registry-server" containerID="cri-o://2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf" gracePeriod=2 Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.454028 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.577570 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-294l7\" (UniqueName: \"kubernetes.io/projected/15ad725c-19dd-4619-9576-f9c6fe0a53f3-kube-api-access-294l7\") pod \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.577881 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-utilities\") pod \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.577926 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-catalog-content\") pod \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\" (UID: \"15ad725c-19dd-4619-9576-f9c6fe0a53f3\") " Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.578782 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-utilities" (OuterVolumeSpecName: "utilities") pod "15ad725c-19dd-4619-9576-f9c6fe0a53f3" (UID: "15ad725c-19dd-4619-9576-f9c6fe0a53f3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.580524 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.586527 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ad725c-19dd-4619-9576-f9c6fe0a53f3-kube-api-access-294l7" (OuterVolumeSpecName: "kube-api-access-294l7") pod "15ad725c-19dd-4619-9576-f9c6fe0a53f3" (UID: "15ad725c-19dd-4619-9576-f9c6fe0a53f3"). InnerVolumeSpecName "kube-api-access-294l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.624764 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15ad725c-19dd-4619-9576-f9c6fe0a53f3" (UID: "15ad725c-19dd-4619-9576-f9c6fe0a53f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.682679 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-294l7\" (UniqueName: \"kubernetes.io/projected/15ad725c-19dd-4619-9576-f9c6fe0a53f3-kube-api-access-294l7\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:34 crc kubenswrapper[4953]: I0223 00:22:34.682731 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ad725c-19dd-4619-9576-f9c6fe0a53f3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.102895 4953 generic.go:334] "Generic (PLEG): container finished" podID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerID="2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf" exitCode=0 Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.102953 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmq2w" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.102966 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmq2w" event={"ID":"15ad725c-19dd-4619-9576-f9c6fe0a53f3","Type":"ContainerDied","Data":"2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf"} Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.103018 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmq2w" event={"ID":"15ad725c-19dd-4619-9576-f9c6fe0a53f3","Type":"ContainerDied","Data":"9fd10e3e4f9e8257f8c3c5f999ba79f77915dabe7f3f1e2e235d542bb546179a"} Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.103038 4953 scope.go:117] "RemoveContainer" containerID="2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.127557 4953 scope.go:117] "RemoveContainer" 
containerID="4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.144415 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmq2w"] Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.148471 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dmq2w"] Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.151019 4953 scope.go:117] "RemoveContainer" containerID="e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.172558 4953 scope.go:117] "RemoveContainer" containerID="2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf" Feb 23 00:22:35 crc kubenswrapper[4953]: E0223 00:22:35.173181 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf\": container with ID starting with 2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf not found: ID does not exist" containerID="2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.173234 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf"} err="failed to get container status \"2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf\": rpc error: code = NotFound desc = could not find container \"2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf\": container with ID starting with 2320248b3cd45717facb8907dbea5165e1bf6a1bcd1f37bd844fb06f4e778faf not found: ID does not exist" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.173281 4953 scope.go:117] "RemoveContainer" 
containerID="4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb" Feb 23 00:22:35 crc kubenswrapper[4953]: E0223 00:22:35.173801 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb\": container with ID starting with 4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb not found: ID does not exist" containerID="4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.173857 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb"} err="failed to get container status \"4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb\": rpc error: code = NotFound desc = could not find container \"4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb\": container with ID starting with 4e2d07a9f02525cd63efd94c3c5c95b71eb3d2a57b15cbeaf050d040e0cfc5cb not found: ID does not exist" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.173878 4953 scope.go:117] "RemoveContainer" containerID="e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23" Feb 23 00:22:35 crc kubenswrapper[4953]: E0223 00:22:35.174142 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23\": container with ID starting with e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23 not found: ID does not exist" containerID="e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.174164 4953 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23"} err="failed to get container status \"e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23\": rpc error: code = NotFound desc = could not find container \"e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23\": container with ID starting with e979dcaf38beb97778ff04533091d845277ef7042b09cbef459cb1fdeab21e23 not found: ID does not exist" Feb 23 00:22:35 crc kubenswrapper[4953]: I0223 00:22:35.333199 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" path="/var/lib/kubelet/pods/15ad725c-19dd-4619-9576-f9c6fe0a53f3/volumes" Feb 23 00:22:47 crc kubenswrapper[4953]: E0223 00:22:47.332862 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:22:59 crc kubenswrapper[4953]: E0223 00:22:59.329531 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:23:10 crc kubenswrapper[4953]: E0223 00:23:10.328797 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" 
podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:23:14 crc kubenswrapper[4953]: I0223 00:23:14.700012 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:23:14 crc kubenswrapper[4953]: I0223 00:23:14.700490 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:23:23 crc kubenswrapper[4953]: E0223 00:23:23.335877 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:23:35 crc kubenswrapper[4953]: E0223 00:23:35.328788 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.664403 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-hwwhn"] Feb 23 00:23:43 crc kubenswrapper[4953]: E0223 00:23:43.671181 4953 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerName="extract-utilities" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.671214 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerName="extract-utilities" Feb 23 00:23:43 crc kubenswrapper[4953]: E0223 00:23:43.671247 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerName="extract-content" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.671257 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerName="extract-content" Feb 23 00:23:43 crc kubenswrapper[4953]: E0223 00:23:43.671268 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerName="registry-server" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.671276 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerName="registry-server" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.671470 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ad725c-19dd-4619-9576-f9c6fe0a53f3" containerName="registry-server" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.672210 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-hwwhn" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.676023 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-hwwhn"] Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.741774 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnzrf\" (UniqueName: \"kubernetes.io/projected/7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81-kube-api-access-mnzrf\") pod \"infrawatch-operators-hwwhn\" (UID: \"7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81\") " pod="service-telemetry/infrawatch-operators-hwwhn" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.844167 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnzrf\" (UniqueName: \"kubernetes.io/projected/7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81-kube-api-access-mnzrf\") pod \"infrawatch-operators-hwwhn\" (UID: \"7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81\") " pod="service-telemetry/infrawatch-operators-hwwhn" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.867054 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnzrf\" (UniqueName: \"kubernetes.io/projected/7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81-kube-api-access-mnzrf\") pod \"infrawatch-operators-hwwhn\" (UID: \"7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81\") " pod="service-telemetry/infrawatch-operators-hwwhn" Feb 23 00:23:43 crc kubenswrapper[4953]: I0223 00:23:43.995596 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-hwwhn" Feb 23 00:23:44 crc kubenswrapper[4953]: I0223 00:23:44.312781 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-hwwhn"] Feb 23 00:23:44 crc kubenswrapper[4953]: W0223 00:23:44.322746 4953 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7082ef46_4c7d_4d6d_b0c2_c6d341bfbc81.slice/crio-7043f4020314c6a620ce65c4887ced7dd63b39e3fe4884acd994443cdf169d08 WatchSource:0}: Error finding container 7043f4020314c6a620ce65c4887ced7dd63b39e3fe4884acd994443cdf169d08: Status 404 returned error can't find the container with id 7043f4020314c6a620ce65c4887ced7dd63b39e3fe4884acd994443cdf169d08 Feb 23 00:23:44 crc kubenswrapper[4953]: E0223 00:23:44.378371 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:23:44 crc kubenswrapper[4953]: E0223 00:23:44.378607 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hwwhn_service-telemetry(7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:23:44 crc kubenswrapper[4953]: E0223 00:23:44.379794 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:23:44 crc kubenswrapper[4953]: I0223 00:23:44.666169 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-hwwhn" event={"ID":"7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81","Type":"ContainerStarted","Data":"7043f4020314c6a620ce65c4887ced7dd63b39e3fe4884acd994443cdf169d08"} Feb 23 00:23:44 crc kubenswrapper[4953]: E0223 00:23:44.670205 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:23:44 crc kubenswrapper[4953]: I0223 00:23:44.699812 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:23:44 crc kubenswrapper[4953]: I0223 00:23:44.699890 4953 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:23:45 crc kubenswrapper[4953]: E0223 00:23:45.674742 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:23:49 crc kubenswrapper[4953]: E0223 00:23:49.327965 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:23:59 crc kubenswrapper[4953]: E0223 00:23:59.368994 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:23:59 crc kubenswrapper[4953]: E0223 00:23:59.370551 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hwwhn_service-telemetry(7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:23:59 crc kubenswrapper[4953]: E0223 00:23:59.372456 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:24:00 crc kubenswrapper[4953]: E0223 00:24:00.328815 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:24:12 crc kubenswrapper[4953]: E0223 00:24:12.331167 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:24:14 crc kubenswrapper[4953]: E0223 00:24:14.328418 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:24:14 crc kubenswrapper[4953]: I0223 00:24:14.707280 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:24:14 crc kubenswrapper[4953]: I0223 00:24:14.707406 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:24:14 crc kubenswrapper[4953]: I0223 00:24:14.707490 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:24:14 crc kubenswrapper[4953]: I0223 00:24:14.708391 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b9977e31f9f0b146ca16d9707802b2ffd298c08bbaca105db28ec645ce6eb46"} pod="openshift-machine-config-operator/machine-config-daemon-gpl86" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 23 00:24:14 crc kubenswrapper[4953]: I0223 00:24:14.708459 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" containerID="cri-o://0b9977e31f9f0b146ca16d9707802b2ffd298c08bbaca105db28ec645ce6eb46" gracePeriod=600 Feb 23 00:24:14 crc kubenswrapper[4953]: I0223 00:24:14.895017 4953 generic.go:334] "Generic (PLEG): container finished" podID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerID="0b9977e31f9f0b146ca16d9707802b2ffd298c08bbaca105db28ec645ce6eb46" exitCode=0 Feb 23 00:24:14 crc kubenswrapper[4953]: I0223 00:24:14.895082 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerDied","Data":"0b9977e31f9f0b146ca16d9707802b2ffd298c08bbaca105db28ec645ce6eb46"} Feb 23 00:24:14 crc kubenswrapper[4953]: I0223 00:24:14.895149 4953 scope.go:117] "RemoveContainer" containerID="f49e743a8b3d2741657144ee2c2b4859d85b42b2b42be220c9172980ca78223a" Feb 23 00:24:15 crc kubenswrapper[4953]: I0223 00:24:15.906473 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"4cfc4cd04a2e7ca8ba01180ccbb3ec7950d02c765bba527d8c7bd8d00c7320eb"} Feb 23 00:24:23 crc kubenswrapper[4953]: E0223 00:24:23.398684 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest 
unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:24:23 crc kubenswrapper[4953]: E0223 00:24:23.399693 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hwwhn_service-telemetry(7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:24:23 crc kubenswrapper[4953]: E0223 00:24:23.401042 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hwwhn" 
podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:24:26 crc kubenswrapper[4953]: E0223 00:24:26.361889 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:24:26 crc kubenswrapper[4953]: E0223 00:24:26.362507 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-szrd7_service-telemetry(fcc5fd41-349f-4826-8b86-e7aef82e8f7b): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:24:26 crc kubenswrapper[4953]: E0223 00:24:26.363732 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-szrd7" 
podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:24:35 crc kubenswrapper[4953]: E0223 00:24:35.328230 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:24:41 crc kubenswrapper[4953]: E0223 00:24:41.332422 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:24:48 crc kubenswrapper[4953]: E0223 00:24:48.329910 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:24:53 crc kubenswrapper[4953]: E0223 00:24:53.340527 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:25:00 crc kubenswrapper[4953]: E0223 00:25:00.328255 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:25:05 crc kubenswrapper[4953]: E0223 00:25:05.331071 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:25:14 crc kubenswrapper[4953]: E0223 00:25:14.378325 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:25:14 crc kubenswrapper[4953]: E0223 00:25:14.378672 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hwwhn_service-telemetry(7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:25:14 crc kubenswrapper[4953]: E0223 00:25:14.379951 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:25:17 crc kubenswrapper[4953]: E0223 00:25:17.331190 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:25:26 crc kubenswrapper[4953]: E0223 00:25:26.328578 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:25:29 crc kubenswrapper[4953]: E0223 00:25:29.330573 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" 
pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:25:41 crc kubenswrapper[4953]: E0223 00:25:41.329237 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:25:43 crc kubenswrapper[4953]: E0223 00:25:43.328821 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:25:54 crc kubenswrapper[4953]: E0223 00:25:54.329830 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:25:58 crc kubenswrapper[4953]: E0223 00:25:58.328992 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:26:06 crc kubenswrapper[4953]: E0223 00:26:06.328091 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:26:13 crc kubenswrapper[4953]: E0223 00:26:13.335574 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:26:21 crc kubenswrapper[4953]: E0223 00:26:21.331562 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:26:27 crc kubenswrapper[4953]: E0223 00:26:27.330370 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:26:36 crc kubenswrapper[4953]: E0223 00:26:36.381936 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:26:36 crc kubenswrapper[4953]: E0223 00:26:36.383224 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hwwhn_service-telemetry(7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:26:36 crc kubenswrapper[4953]: E0223 00:26:36.384543 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hwwhn" 
podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:26:42 crc kubenswrapper[4953]: E0223 00:26:42.328015 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:26:44 crc kubenswrapper[4953]: I0223 00:26:44.700547 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:26:44 crc kubenswrapper[4953]: I0223 00:26:44.700944 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:26:51 crc kubenswrapper[4953]: E0223 00:26:51.330091 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:26:55 crc kubenswrapper[4953]: E0223 00:26:55.328984 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:27:03 crc kubenswrapper[4953]: E0223 00:27:03.334917 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:27:10 crc kubenswrapper[4953]: E0223 00:27:10.328670 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:27:14 crc kubenswrapper[4953]: I0223 00:27:14.700376 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:27:14 crc kubenswrapper[4953]: I0223 00:27:14.701021 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:27:17 crc kubenswrapper[4953]: E0223 00:27:17.331381 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:27:25 crc kubenswrapper[4953]: E0223 00:27:25.331181 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:27:32 crc kubenswrapper[4953]: E0223 00:27:32.330692 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:27:39 crc kubenswrapper[4953]: E0223 00:27:39.330620 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:27:44 crc kubenswrapper[4953]: I0223 00:27:44.700172 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 00:27:44 crc 
kubenswrapper[4953]: I0223 00:27:44.700732 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 00:27:44 crc kubenswrapper[4953]: I0223 00:27:44.700793 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" Feb 23 00:27:44 crc kubenswrapper[4953]: I0223 00:27:44.701395 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cfc4cd04a2e7ca8ba01180ccbb3ec7950d02c765bba527d8c7bd8d00c7320eb"} pod="openshift-machine-config-operator/machine-config-daemon-gpl86" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 00:27:44 crc kubenswrapper[4953]: I0223 00:27:44.701465 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" containerID="cri-o://4cfc4cd04a2e7ca8ba01180ccbb3ec7950d02c765bba527d8c7bd8d00c7320eb" gracePeriod=600 Feb 23 00:27:45 crc kubenswrapper[4953]: I0223 00:27:45.689008 4953 generic.go:334] "Generic (PLEG): container finished" podID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerID="4cfc4cd04a2e7ca8ba01180ccbb3ec7950d02c765bba527d8c7bd8d00c7320eb" exitCode=0 Feb 23 00:27:45 crc kubenswrapper[4953]: I0223 00:27:45.689071 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerDied","Data":"4cfc4cd04a2e7ca8ba01180ccbb3ec7950d02c765bba527d8c7bd8d00c7320eb"} 
Feb 23 00:27:45 crc kubenswrapper[4953]: I0223 00:27:45.689763 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"7bbb1432a118683a73efc7d5fa3c37ea9fc9b879091deb6f1bba6e886fc0c71f"} Feb 23 00:27:45 crc kubenswrapper[4953]: I0223 00:27:45.689807 4953 scope.go:117] "RemoveContainer" containerID="0b9977e31f9f0b146ca16d9707802b2ffd298c08bbaca105db28ec645ce6eb46" Feb 23 00:27:46 crc kubenswrapper[4953]: E0223 00:27:46.328280 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:27:51 crc kubenswrapper[4953]: E0223 00:27:51.331743 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:28:00 crc kubenswrapper[4953]: E0223 00:28:00.329746 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:28:05 crc kubenswrapper[4953]: E0223 00:28:05.329064 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:28:12 crc kubenswrapper[4953]: E0223 00:28:12.329674 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:28:20 crc kubenswrapper[4953]: E0223 00:28:20.329895 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:28:23 crc kubenswrapper[4953]: E0223 00:28:23.336776 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:28:32 crc kubenswrapper[4953]: E0223 00:28:32.329979 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" 
pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:28:38 crc kubenswrapper[4953]: E0223 00:28:38.329423 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:28:43 crc kubenswrapper[4953]: E0223 00:28:43.337811 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:28:51 crc kubenswrapper[4953]: E0223 00:28:51.329366 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:28:54 crc kubenswrapper[4953]: E0223 00:28:54.328634 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:29:02 crc kubenswrapper[4953]: E0223 00:29:02.376974 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:29:05 crc kubenswrapper[4953]: E0223 00:29:05.331793 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:29:14 crc kubenswrapper[4953]: E0223 00:29:14.331623 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:29:19 crc kubenswrapper[4953]: E0223 00:29:19.337714 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:29:26 crc kubenswrapper[4953]: I0223 00:29:26.329965 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 00:29:26 crc kubenswrapper[4953]: E0223 00:29:26.425427 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:29:26 crc kubenswrapper[4953]: E0223 00:29:26.425732 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hwwhn_service-telemetry(7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:29:26 crc kubenswrapper[4953]: E0223 00:29:26.427096 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hwwhn" 
podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:29:32 crc kubenswrapper[4953]: E0223 00:29:32.372587 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Feb 23 00:29:32 crc kubenswrapper[4953]: E0223 00:29:32.373696 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-szrd7_service-telemetry(fcc5fd41-349f-4826-8b86-e7aef82e8f7b): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Feb 23 00:29:32 crc kubenswrapper[4953]: E0223 00:29:32.375028 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-szrd7" 
podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:29:41 crc kubenswrapper[4953]: E0223 00:29:41.329392 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.401323 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2px74/must-gather-5b84w"] Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.404363 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.409684 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2px74"/"openshift-service-ca.crt" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.410082 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2px74"/"default-dockercfg-wrfgc" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.420639 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2px74/must-gather-5b84w"] Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.427272 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2px74"/"kube-root-ca.crt" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.481128 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tss\" (UniqueName: \"kubernetes.io/projected/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-kube-api-access-k9tss\") pod \"must-gather-5b84w\" (UID: \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\") " pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 
00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.481279 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-must-gather-output\") pod \"must-gather-5b84w\" (UID: \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\") " pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.583624 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-must-gather-output\") pod \"must-gather-5b84w\" (UID: \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\") " pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.583762 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9tss\" (UniqueName: \"kubernetes.io/projected/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-kube-api-access-k9tss\") pod \"must-gather-5b84w\" (UID: \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\") " pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.584200 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-must-gather-output\") pod \"must-gather-5b84w\" (UID: \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\") " pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:29:42 crc kubenswrapper[4953]: I0223 00:29:42.607681 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9tss\" (UniqueName: \"kubernetes.io/projected/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-kube-api-access-k9tss\") pod \"must-gather-5b84w\" (UID: \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\") " pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:29:42 crc 
kubenswrapper[4953]: I0223 00:29:42.844242 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:29:43 crc kubenswrapper[4953]: I0223 00:29:43.112908 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2px74/must-gather-5b84w"] Feb 23 00:29:43 crc kubenswrapper[4953]: I0223 00:29:43.863025 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2px74/must-gather-5b84w" event={"ID":"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7","Type":"ContainerStarted","Data":"a692fecf5377d8acf01926ff4a4f8fa593f810defff4ec8faf9f93bc3a5915fa"} Feb 23 00:29:44 crc kubenswrapper[4953]: E0223 00:29:44.328584 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:29:50 crc kubenswrapper[4953]: I0223 00:29:50.926063 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2px74/must-gather-5b84w" event={"ID":"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7","Type":"ContainerStarted","Data":"0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef"} Feb 23 00:29:50 crc kubenswrapper[4953]: I0223 00:29:50.926583 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2px74/must-gather-5b84w" event={"ID":"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7","Type":"ContainerStarted","Data":"8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447"} Feb 23 00:29:50 crc kubenswrapper[4953]: I0223 00:29:50.950560 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2px74/must-gather-5b84w" podStartSLOduration=2.361204769 
podStartE2EDuration="8.950535054s" podCreationTimestamp="2026-02-23 00:29:42 +0000 UTC" firstStartedPulling="2026-02-23 00:29:43.128091181 +0000 UTC m=+1381.061933027" lastFinishedPulling="2026-02-23 00:29:49.717421466 +0000 UTC m=+1387.651263312" observedRunningTime="2026-02-23 00:29:50.948116111 +0000 UTC m=+1388.881957957" watchObservedRunningTime="2026-02-23 00:29:50.950535054 +0000 UTC m=+1388.884376900"
Feb 23 00:29:55 crc kubenswrapper[4953]: E0223 00:29:55.329156 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:29:59 crc kubenswrapper[4953]: E0223 00:29:59.330396 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.148807 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"]
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.149789 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.152854 4953 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.153498 4953 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.173671 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"]
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.237019 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13119f3a-6c70-4c3b-936c-7da4af1d8524-config-volume\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.237114 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13119f3a-6c70-4c3b-936c-7da4af1d8524-secret-volume\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.237155 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjpxt\" (UniqueName: \"kubernetes.io/projected/13119f3a-6c70-4c3b-936c-7da4af1d8524-kube-api-access-gjpxt\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.338340 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjpxt\" (UniqueName: \"kubernetes.io/projected/13119f3a-6c70-4c3b-936c-7da4af1d8524-kube-api-access-gjpxt\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.338419 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13119f3a-6c70-4c3b-936c-7da4af1d8524-config-volume\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.338471 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13119f3a-6c70-4c3b-936c-7da4af1d8524-secret-volume\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.340393 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13119f3a-6c70-4c3b-936c-7da4af1d8524-config-volume\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.351256 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13119f3a-6c70-4c3b-936c-7da4af1d8524-secret-volume\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.363047 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjpxt\" (UniqueName: \"kubernetes.io/projected/13119f3a-6c70-4c3b-936c-7da4af1d8524-kube-api-access-gjpxt\") pod \"collect-profiles-29530110-kdkjx\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.485545 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:00 crc kubenswrapper[4953]: I0223 00:30:00.696530 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"]
Feb 23 00:30:01 crc kubenswrapper[4953]: I0223 00:30:01.007210 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx" event={"ID":"13119f3a-6c70-4c3b-936c-7da4af1d8524","Type":"ContainerStarted","Data":"13925a843c7a753ac6f4a9a93a218d828857b45dfad2dc292ef61de7950d62dd"}
Feb 23 00:30:01 crc kubenswrapper[4953]: I0223 00:30:01.007530 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx" event={"ID":"13119f3a-6c70-4c3b-936c-7da4af1d8524","Type":"ContainerStarted","Data":"d59b9a1b12c46341a977fefb89577611ad61873b375e8880b15c02d23faf9e76"}
Feb 23 00:30:02 crc kubenswrapper[4953]: I0223 00:30:02.018345 4953 generic.go:334] "Generic (PLEG): container finished" podID="13119f3a-6c70-4c3b-936c-7da4af1d8524" containerID="13925a843c7a753ac6f4a9a93a218d828857b45dfad2dc292ef61de7950d62dd" exitCode=0
Feb 23 00:30:02 crc kubenswrapper[4953]: I0223 00:30:02.018408 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx" event={"ID":"13119f3a-6c70-4c3b-936c-7da4af1d8524","Type":"ContainerDied","Data":"13925a843c7a753ac6f4a9a93a218d828857b45dfad2dc292ef61de7950d62dd"}
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.359168 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.385048 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13119f3a-6c70-4c3b-936c-7da4af1d8524-secret-volume\") pod \"13119f3a-6c70-4c3b-936c-7da4af1d8524\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") "
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.385106 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjpxt\" (UniqueName: \"kubernetes.io/projected/13119f3a-6c70-4c3b-936c-7da4af1d8524-kube-api-access-gjpxt\") pod \"13119f3a-6c70-4c3b-936c-7da4af1d8524\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") "
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.385274 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13119f3a-6c70-4c3b-936c-7da4af1d8524-config-volume\") pod \"13119f3a-6c70-4c3b-936c-7da4af1d8524\" (UID: \"13119f3a-6c70-4c3b-936c-7da4af1d8524\") "
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.386091 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13119f3a-6c70-4c3b-936c-7da4af1d8524-config-volume" (OuterVolumeSpecName: "config-volume") pod "13119f3a-6c70-4c3b-936c-7da4af1d8524" (UID: "13119f3a-6c70-4c3b-936c-7da4af1d8524"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.409488 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13119f3a-6c70-4c3b-936c-7da4af1d8524-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "13119f3a-6c70-4c3b-936c-7da4af1d8524" (UID: "13119f3a-6c70-4c3b-936c-7da4af1d8524"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.419556 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13119f3a-6c70-4c3b-936c-7da4af1d8524-kube-api-access-gjpxt" (OuterVolumeSpecName: "kube-api-access-gjpxt") pod "13119f3a-6c70-4c3b-936c-7da4af1d8524" (UID: "13119f3a-6c70-4c3b-936c-7da4af1d8524"). InnerVolumeSpecName "kube-api-access-gjpxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.491478 4953 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13119f3a-6c70-4c3b-936c-7da4af1d8524-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.491547 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjpxt\" (UniqueName: \"kubernetes.io/projected/13119f3a-6c70-4c3b-936c-7da4af1d8524-kube-api-access-gjpxt\") on node \"crc\" DevicePath \"\""
Feb 23 00:30:03 crc kubenswrapper[4953]: I0223 00:30:03.491564 4953 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13119f3a-6c70-4c3b-936c-7da4af1d8524-config-volume\") on node \"crc\" DevicePath \"\""
Feb 23 00:30:04 crc kubenswrapper[4953]: I0223 00:30:04.032592 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx" event={"ID":"13119f3a-6c70-4c3b-936c-7da4af1d8524","Type":"ContainerDied","Data":"d59b9a1b12c46341a977fefb89577611ad61873b375e8880b15c02d23faf9e76"}
Feb 23 00:30:04 crc kubenswrapper[4953]: I0223 00:30:04.032952 4953 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d59b9a1b12c46341a977fefb89577611ad61873b375e8880b15c02d23faf9e76"
Feb 23 00:30:04 crc kubenswrapper[4953]: I0223 00:30:04.032643 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530110-kdkjx"
Feb 23 00:30:09 crc kubenswrapper[4953]: E0223 00:30:09.328973 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:30:11 crc kubenswrapper[4953]: E0223 00:30:11.329939 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:30:14 crc kubenswrapper[4953]: I0223 00:30:14.700462 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 00:30:14 crc kubenswrapper[4953]: I0223 00:30:14.700607 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 00:30:23 crc kubenswrapper[4953]: E0223 00:30:23.337173 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:30:23 crc kubenswrapper[4953]: E0223 00:30:23.338731 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:30:35 crc kubenswrapper[4953]: E0223 00:30:35.330863 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:30:35 crc kubenswrapper[4953]: E0223 00:30:35.331311 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:30:40 crc kubenswrapper[4953]: I0223 00:30:40.176861 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pjp8z_6b7576be-af0b-4553-bc96-125e87709ad1/control-plane-machine-set-operator/0.log"
Feb 23 00:30:40 crc kubenswrapper[4953]: I0223 00:30:40.341762 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-flc99_f6643e75-78f7-40fc-b597-d37fa9381727/kube-rbac-proxy/0.log"
Feb 23 00:30:40 crc kubenswrapper[4953]: I0223 00:30:40.406084 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-flc99_f6643e75-78f7-40fc-b597-d37fa9381727/machine-api-operator/0.log"
Feb 23 00:30:44 crc kubenswrapper[4953]: I0223 00:30:44.701618 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 00:30:44 crc kubenswrapper[4953]: I0223 00:30:44.702252 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 00:30:47 crc kubenswrapper[4953]: E0223 00:30:47.332120 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:30:48 crc kubenswrapper[4953]: E0223 00:30:48.327958 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:30:54 crc kubenswrapper[4953]: I0223 00:30:54.488375 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-zvds4_7f21ba8e-cf20-4d5b-97ee-70ba7583a380/cert-manager-controller/0.log"
Feb 23 00:30:54 crc kubenswrapper[4953]: I0223 00:30:54.676450 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-z2kb8_5f1a348d-9fbc-45fc-9308-00a3201cc0c7/cert-manager-webhook/0.log"
Feb 23 00:30:54 crc kubenswrapper[4953]: I0223 00:30:54.685304 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-8tcxx_9f3fa8cc-7b5d-4e59-824d-8247391c15d7/cert-manager-cainjector/0.log"
Feb 23 00:31:01 crc kubenswrapper[4953]: E0223 00:31:01.329413 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:31:01 crc kubenswrapper[4953]: E0223 00:31:01.329531 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:31:10 crc kubenswrapper[4953]: I0223 00:31:10.484208 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-p9mgz_bde89fe3-f774-4c9b-924d-fccad8941098/prometheus-operator/0.log"
Feb 23 00:31:10 crc kubenswrapper[4953]: I0223 00:31:10.645605 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc_ae05751c-98c6-4129-8206-148d9553e542/prometheus-operator-admission-webhook/0.log"
Feb 23 00:31:10 crc kubenswrapper[4953]: I0223 00:31:10.685190 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-758bb7fb84-bll92_063a0b62-c9b0-4730-9485-ecd85781d17a/prometheus-operator-admission-webhook/0.log"
Feb 23 00:31:10 crc kubenswrapper[4953]: I0223 00:31:10.837746 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6gqst_7bf9b63e-1b5b-4063-a9e9-3619753fc50e/operator/0.log"
Feb 23 00:31:10 crc kubenswrapper[4953]: I0223 00:31:10.881034 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-x7skn_6166ab70-4311-4eeb-a162-a48aa002f5f1/perses-operator/0.log"
Feb 23 00:31:12 crc kubenswrapper[4953]: E0223 00:31:12.330937 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:31:14 crc kubenswrapper[4953]: I0223 00:31:14.699982 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 00:31:14 crc kubenswrapper[4953]: I0223 00:31:14.700530 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 00:31:14 crc kubenswrapper[4953]: I0223 00:31:14.700596 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpl86"
Feb 23 00:31:14 crc kubenswrapper[4953]: I0223 00:31:14.701427 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bbb1432a118683a73efc7d5fa3c37ea9fc9b879091deb6f1bba6e886fc0c71f"} pod="openshift-machine-config-operator/machine-config-daemon-gpl86" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 00:31:14 crc kubenswrapper[4953]: I0223 00:31:14.701502 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" containerID="cri-o://7bbb1432a118683a73efc7d5fa3c37ea9fc9b879091deb6f1bba6e886fc0c71f" gracePeriod=600
Feb 23 00:31:15 crc kubenswrapper[4953]: E0223 00:31:15.333823 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:31:15 crc kubenswrapper[4953]: I0223 00:31:15.610997 4953 generic.go:334] "Generic (PLEG): container finished" podID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerID="7bbb1432a118683a73efc7d5fa3c37ea9fc9b879091deb6f1bba6e886fc0c71f" exitCode=0
Feb 23 00:31:15 crc kubenswrapper[4953]: I0223 00:31:15.611132 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerDied","Data":"7bbb1432a118683a73efc7d5fa3c37ea9fc9b879091deb6f1bba6e886fc0c71f"}
Feb 23 00:31:15 crc kubenswrapper[4953]: I0223 00:31:15.611213 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerStarted","Data":"d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"}
Feb 23 00:31:15 crc kubenswrapper[4953]: I0223 00:31:15.611243 4953 scope.go:117] "RemoveContainer" containerID="4cfc4cd04a2e7ca8ba01180ccbb3ec7950d02c765bba527d8c7bd8d00c7320eb"
Feb 23 00:31:23 crc kubenswrapper[4953]: E0223 00:31:23.331903 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:31:26 crc kubenswrapper[4953]: I0223 00:31:26.727601 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5_61b8140f-0a3b-401c-ac40-92def4a3f617/util/0.log"
Feb 23 00:31:26 crc kubenswrapper[4953]: I0223 00:31:26.918866 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5_61b8140f-0a3b-401c-ac40-92def4a3f617/util/0.log"
Feb 23 00:31:26 crc kubenswrapper[4953]: I0223 00:31:26.947619 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5_61b8140f-0a3b-401c-ac40-92def4a3f617/pull/0.log"
Feb 23 00:31:26 crc kubenswrapper[4953]: I0223 00:31:26.976202 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5_61b8140f-0a3b-401c-ac40-92def4a3f617/pull/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.135642 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5_61b8140f-0a3b-401c-ac40-92def4a3f617/pull/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.153225 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5_61b8140f-0a3b-401c-ac40-92def4a3f617/util/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.153957 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12q9c5_61b8140f-0a3b-401c-ac40-92def4a3f617/extract/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.300514 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h_9b5a2242-d62e-40fd-be0d-80648962c8d8/util/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: E0223 00:31:27.330352 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.554381 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h_9b5a2242-d62e-40fd-be0d-80648962c8d8/util/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.570985 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h_9b5a2242-d62e-40fd-be0d-80648962c8d8/pull/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.572805 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h_9b5a2242-d62e-40fd-be0d-80648962c8d8/pull/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.739326 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h_9b5a2242-d62e-40fd-be0d-80648962c8d8/util/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.766303 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h_9b5a2242-d62e-40fd-be0d-80648962c8d8/extract/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.769666 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hgc9h_9b5a2242-d62e-40fd-be0d-80648962c8d8/pull/0.log"
Feb 23 00:31:27 crc kubenswrapper[4953]: I0223 00:31:27.938769 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh_64bb377e-103a-40e7-a37f-de41160c7a61/util/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.154307 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh_64bb377e-103a-40e7-a37f-de41160c7a61/util/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.155538 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh_64bb377e-103a-40e7-a37f-de41160c7a61/pull/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.170720 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh_64bb377e-103a-40e7-a37f-de41160c7a61/pull/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.354253 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh_64bb377e-103a-40e7-a37f-de41160c7a61/util/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.369039 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh_64bb377e-103a-40e7-a37f-de41160c7a61/pull/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.396130 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pjdxh_64bb377e-103a-40e7-a37f-de41160c7a61/extract/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.575550 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tccw9_4f8e5178-695c-48c9-a34b-98b5a9659111/extract-utilities/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.775243 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tccw9_4f8e5178-695c-48c9-a34b-98b5a9659111/extract-content/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.788399 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tccw9_4f8e5178-695c-48c9-a34b-98b5a9659111/extract-utilities/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.808751 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tccw9_4f8e5178-695c-48c9-a34b-98b5a9659111/extract-content/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.947917 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tccw9_4f8e5178-695c-48c9-a34b-98b5a9659111/extract-utilities/0.log"
Feb 23 00:31:28 crc kubenswrapper[4953]: I0223 00:31:28.965816 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tccw9_4f8e5178-695c-48c9-a34b-98b5a9659111/extract-content/0.log"
Feb 23 00:31:29 crc kubenswrapper[4953]: I0223 00:31:29.217740 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tccw9_4f8e5178-695c-48c9-a34b-98b5a9659111/registry-server/0.log"
Feb 23 00:31:29 crc kubenswrapper[4953]: I0223 00:31:29.242122 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2g5l7_d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc/extract-utilities/0.log"
Feb 23 00:31:29 crc kubenswrapper[4953]: I0223 00:31:29.454164 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2g5l7_d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc/extract-content/0.log"
Feb 23 00:31:29 crc kubenswrapper[4953]: I0223 00:31:29.486188 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2g5l7_d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc/extract-utilities/0.log"
Feb 23 00:31:29 crc kubenswrapper[4953]: I0223 00:31:29.487237 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2g5l7_d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc/extract-content/0.log"
Feb 23 00:31:29 crc kubenswrapper[4953]: I0223 00:31:29.651485 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2g5l7_d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc/extract-content/0.log"
Feb 23 00:31:29 crc kubenswrapper[4953]: I0223 00:31:29.696945 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2g5l7_d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc/extract-utilities/0.log"
Feb 23 00:31:29 crc kubenswrapper[4953]: I0223 00:31:29.888431 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xn8m6_e6fcf4dd-f162-4b92-82b2-98bf669fd3f2/marketplace-operator/0.log"
Feb 23 00:31:30 crc kubenswrapper[4953]: I0223 00:31:30.027257 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2g5l7_d2e714e5-50c5-4a8f-a500-68eb0bc4f5fc/registry-server/0.log"
Feb 23 00:31:30 crc kubenswrapper[4953]: I0223 00:31:30.050254 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvj5x_7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7/extract-utilities/0.log"
Feb 23 00:31:30 crc kubenswrapper[4953]: I0223 00:31:30.159448 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvj5x_7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7/extract-utilities/0.log"
Feb 23 00:31:30 crc kubenswrapper[4953]: I0223 00:31:30.193803 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvj5x_7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7/extract-content/0.log"
Feb 23 00:31:30 crc kubenswrapper[4953]: I0223 00:31:30.232936 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvj5x_7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7/extract-content/0.log"
Feb 23 00:31:30 crc kubenswrapper[4953]: I0223 00:31:30.456480 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvj5x_7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7/extract-utilities/0.log"
Feb 23 00:31:30 crc kubenswrapper[4953]: I0223 00:31:30.475250 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvj5x_7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7/extract-content/0.log"
Feb 23 00:31:30 crc kubenswrapper[4953]: I0223 00:31:30.706510 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvj5x_7be3ebf1-fa87-4f28-95d5-a4a0e09b3dc7/registry-server/0.log"
Feb 23 00:31:36 crc kubenswrapper[4953]: E0223 00:31:36.329632 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:31:41 crc kubenswrapper[4953]: E0223 00:31:41.330229 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:31:44 crc kubenswrapper[4953]: I0223 00:31:44.771478 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-p9mgz_bde89fe3-f774-4c9b-924d-fccad8941098/prometheus-operator/0.log"
Feb 23 00:31:44 crc kubenswrapper[4953]: I0223 00:31:44.810151 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-758bb7fb84-bll92_063a0b62-c9b0-4730-9485-ecd85781d17a/prometheus-operator-admission-webhook/0.log"
Feb 23 00:31:44 crc kubenswrapper[4953]: I0223 00:31:44.812215 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-758bb7fb84-8gtwc_ae05751c-98c6-4129-8206-148d9553e542/prometheus-operator-admission-webhook/0.log"
Feb 23 00:31:44 crc kubenswrapper[4953]: I0223 00:31:44.949183 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6gqst_7bf9b63e-1b5b-4063-a9e9-3619753fc50e/operator/0.log"
Feb 23 00:31:44 crc kubenswrapper[4953]: I0223 00:31:44.993954 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-x7skn_6166ab70-4311-4eeb-a162-a48aa002f5f1/perses-operator/0.log"
Feb 23 00:31:51 crc kubenswrapper[4953]: E0223 00:31:51.330451 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:31:54 crc kubenswrapper[4953]: E0223 00:31:54.331073 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:32:02 crc kubenswrapper[4953]: E0223 00:32:02.329848 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:32:06 crc kubenswrapper[4953]: E0223 00:32:06.330000 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.749906 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6s5jg"] Feb 23 00:32:13 crc kubenswrapper[4953]: E0223 00:32:13.751264 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13119f3a-6c70-4c3b-936c-7da4af1d8524" containerName="collect-profiles" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.751280 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="13119f3a-6c70-4c3b-936c-7da4af1d8524" containerName="collect-profiles" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.751460 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="13119f3a-6c70-4c3b-936c-7da4af1d8524" containerName="collect-profiles" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.752487 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.766670 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6s5jg"] Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.768865 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt6xc\" (UniqueName: \"kubernetes.io/projected/cbb58c59-2b69-4250-990c-05bb920211c7-kube-api-access-xt6xc\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.769000 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-utilities\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.769035 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-catalog-content\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.869913 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-utilities\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.869979 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-catalog-content\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.870020 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt6xc\" (UniqueName: \"kubernetes.io/projected/cbb58c59-2b69-4250-990c-05bb920211c7-kube-api-access-xt6xc\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.871014 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-utilities\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.871067 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-catalog-content\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:13 crc kubenswrapper[4953]: I0223 00:32:13.894502 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt6xc\" (UniqueName: \"kubernetes.io/projected/cbb58c59-2b69-4250-990c-05bb920211c7-kube-api-access-xt6xc\") pod \"redhat-operators-6s5jg\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:14 crc kubenswrapper[4953]: I0223 00:32:14.080632 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:14 crc kubenswrapper[4953]: E0223 00:32:14.328551 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:32:14 crc kubenswrapper[4953]: I0223 00:32:14.369960 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6s5jg"] Feb 23 00:32:15 crc kubenswrapper[4953]: I0223 00:32:15.133501 4953 generic.go:334] "Generic (PLEG): container finished" podID="cbb58c59-2b69-4250-990c-05bb920211c7" containerID="d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686" exitCode=0 Feb 23 00:32:15 crc kubenswrapper[4953]: I0223 00:32:15.133621 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s5jg" event={"ID":"cbb58c59-2b69-4250-990c-05bb920211c7","Type":"ContainerDied","Data":"d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686"} Feb 23 00:32:15 crc kubenswrapper[4953]: I0223 00:32:15.133997 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s5jg" event={"ID":"cbb58c59-2b69-4250-990c-05bb920211c7","Type":"ContainerStarted","Data":"8b98addc51e7f9fc53107a6528592ec350909503ad691a0eb3d28c07aebaffd3"} Feb 23 00:32:17 crc kubenswrapper[4953]: I0223 00:32:17.154635 4953 generic.go:334] "Generic (PLEG): container finished" podID="cbb58c59-2b69-4250-990c-05bb920211c7" containerID="608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd" exitCode=0 Feb 23 00:32:17 crc kubenswrapper[4953]: I0223 00:32:17.154693 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6s5jg" event={"ID":"cbb58c59-2b69-4250-990c-05bb920211c7","Type":"ContainerDied","Data":"608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd"} Feb 23 00:32:19 crc kubenswrapper[4953]: I0223 00:32:19.176571 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s5jg" event={"ID":"cbb58c59-2b69-4250-990c-05bb920211c7","Type":"ContainerStarted","Data":"31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046"} Feb 23 00:32:19 crc kubenswrapper[4953]: I0223 00:32:19.224127 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6s5jg" podStartSLOduration=2.79018576 podStartE2EDuration="6.224092293s" podCreationTimestamp="2026-02-23 00:32:13 +0000 UTC" firstStartedPulling="2026-02-23 00:32:15.138343474 +0000 UTC m=+1533.072185350" lastFinishedPulling="2026-02-23 00:32:18.572250037 +0000 UTC m=+1536.506091883" observedRunningTime="2026-02-23 00:32:19.205941867 +0000 UTC m=+1537.139783723" watchObservedRunningTime="2026-02-23 00:32:19.224092293 +0000 UTC m=+1537.157934179" Feb 23 00:32:19 crc kubenswrapper[4953]: E0223 00:32:19.332803 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:32:24 crc kubenswrapper[4953]: I0223 00:32:24.081639 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:24 crc kubenswrapper[4953]: I0223 00:32:24.081694 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:25 crc 
kubenswrapper[4953]: I0223 00:32:25.135514 4953 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6s5jg" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="registry-server" probeResult="failure" output=< Feb 23 00:32:25 crc kubenswrapper[4953]: timeout: failed to connect service ":50051" within 1s Feb 23 00:32:25 crc kubenswrapper[4953]: > Feb 23 00:32:28 crc kubenswrapper[4953]: E0223 00:32:28.334741 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:32:32 crc kubenswrapper[4953]: E0223 00:32:32.329913 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:32:34 crc kubenswrapper[4953]: I0223 00:32:34.133264 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:34 crc kubenswrapper[4953]: I0223 00:32:34.188176 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:34 crc kubenswrapper[4953]: I0223 00:32:34.387137 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6s5jg"] Feb 23 00:32:35 crc kubenswrapper[4953]: I0223 00:32:35.330349 4953 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-6s5jg" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="registry-server" containerID="cri-o://31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046" gracePeriod=2 Feb 23 00:32:35 crc kubenswrapper[4953]: I0223 00:32:35.809624 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:35 crc kubenswrapper[4953]: I0223 00:32:35.945074 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-utilities\") pod \"cbb58c59-2b69-4250-990c-05bb920211c7\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " Feb 23 00:32:35 crc kubenswrapper[4953]: I0223 00:32:35.945206 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-catalog-content\") pod \"cbb58c59-2b69-4250-990c-05bb920211c7\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " Feb 23 00:32:35 crc kubenswrapper[4953]: I0223 00:32:35.945384 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt6xc\" (UniqueName: \"kubernetes.io/projected/cbb58c59-2b69-4250-990c-05bb920211c7-kube-api-access-xt6xc\") pod \"cbb58c59-2b69-4250-990c-05bb920211c7\" (UID: \"cbb58c59-2b69-4250-990c-05bb920211c7\") " Feb 23 00:32:35 crc kubenswrapper[4953]: I0223 00:32:35.947618 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-utilities" (OuterVolumeSpecName: "utilities") pod "cbb58c59-2b69-4250-990c-05bb920211c7" (UID: "cbb58c59-2b69-4250-990c-05bb920211c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:32:35 crc kubenswrapper[4953]: I0223 00:32:35.964679 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb58c59-2b69-4250-990c-05bb920211c7-kube-api-access-xt6xc" (OuterVolumeSpecName: "kube-api-access-xt6xc") pod "cbb58c59-2b69-4250-990c-05bb920211c7" (UID: "cbb58c59-2b69-4250-990c-05bb920211c7"). InnerVolumeSpecName "kube-api-access-xt6xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.046911 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.046950 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt6xc\" (UniqueName: \"kubernetes.io/projected/cbb58c59-2b69-4250-990c-05bb920211c7-kube-api-access-xt6xc\") on node \"crc\" DevicePath \"\"" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.151508 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbb58c59-2b69-4250-990c-05bb920211c7" (UID: "cbb58c59-2b69-4250-990c-05bb920211c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.249205 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb58c59-2b69-4250-990c-05bb920211c7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.345721 4953 generic.go:334] "Generic (PLEG): container finished" podID="cbb58c59-2b69-4250-990c-05bb920211c7" containerID="31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046" exitCode=0 Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.345796 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s5jg" event={"ID":"cbb58c59-2b69-4250-990c-05bb920211c7","Type":"ContainerDied","Data":"31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046"} Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.346065 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6s5jg" event={"ID":"cbb58c59-2b69-4250-990c-05bb920211c7","Type":"ContainerDied","Data":"8b98addc51e7f9fc53107a6528592ec350909503ad691a0eb3d28c07aebaffd3"} Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.346073 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6s5jg" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.346099 4953 scope.go:117] "RemoveContainer" containerID="31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.383863 4953 scope.go:117] "RemoveContainer" containerID="608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.405836 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6s5jg"] Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.416281 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6s5jg"] Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.427094 4953 scope.go:117] "RemoveContainer" containerID="d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.447025 4953 scope.go:117] "RemoveContainer" containerID="31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046" Feb 23 00:32:36 crc kubenswrapper[4953]: E0223 00:32:36.447713 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046\": container with ID starting with 31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046 not found: ID does not exist" containerID="31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.447783 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046"} err="failed to get container status \"31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046\": rpc error: code = NotFound desc = could not find container 
\"31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046\": container with ID starting with 31248d5ba0f8c4c1ed72a831327d9c848dc09040a5702e2e239a0512175c3046 not found: ID does not exist" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.447838 4953 scope.go:117] "RemoveContainer" containerID="608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd" Feb 23 00:32:36 crc kubenswrapper[4953]: E0223 00:32:36.448366 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd\": container with ID starting with 608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd not found: ID does not exist" containerID="608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.448435 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd"} err="failed to get container status \"608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd\": rpc error: code = NotFound desc = could not find container \"608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd\": container with ID starting with 608088599b75735c3c78000fe17736f2c002b3773c9f85abc5b9e2117d3509bd not found: ID does not exist" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.448475 4953 scope.go:117] "RemoveContainer" containerID="d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686" Feb 23 00:32:36 crc kubenswrapper[4953]: E0223 00:32:36.448933 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686\": container with ID starting with d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686 not found: ID does not exist" 
containerID="d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686" Feb 23 00:32:36 crc kubenswrapper[4953]: I0223 00:32:36.449011 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686"} err="failed to get container status \"d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686\": rpc error: code = NotFound desc = could not find container \"d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686\": container with ID starting with d6bcc9181842ea0a53c81d90c808e675518b6e5087cc7c5c52943eb1b396e686 not found: ID does not exist" Feb 23 00:32:37 crc kubenswrapper[4953]: I0223 00:32:37.337476 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" path="/var/lib/kubelet/pods/cbb58c59-2b69-4250-990c-05bb920211c7/volumes" Feb 23 00:32:39 crc kubenswrapper[4953]: E0223 00:32:39.342547 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.413411 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-94gsl"] Feb 23 00:32:39 crc kubenswrapper[4953]: E0223 00:32:39.414092 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="registry-server" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.414205 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="registry-server" Feb 23 00:32:39 crc kubenswrapper[4953]: E0223 00:32:39.414335 4953 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="extract-utilities" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.415449 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="extract-utilities" Feb 23 00:32:39 crc kubenswrapper[4953]: E0223 00:32:39.415585 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="extract-content" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.415665 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="extract-content" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.415901 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb58c59-2b69-4250-990c-05bb920211c7" containerName="registry-server" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.417176 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.431691 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-94gsl"] Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.501089 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqw85\" (UniqueName: \"kubernetes.io/projected/af9a4bcf-2982-45b5-bcad-35df537e5c9c-kube-api-access-lqw85\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.501372 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-catalog-content\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.501425 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-utilities\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.603597 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqw85\" (UniqueName: \"kubernetes.io/projected/af9a4bcf-2982-45b5-bcad-35df537e5c9c-kube-api-access-lqw85\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.603713 4953 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-catalog-content\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.603742 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-utilities\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.604510 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-utilities\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.604523 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-catalog-content\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.642261 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqw85\" (UniqueName: \"kubernetes.io/projected/af9a4bcf-2982-45b5-bcad-35df537e5c9c-kube-api-access-lqw85\") pod \"certified-operators-94gsl\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:39 crc kubenswrapper[4953]: I0223 00:32:39.753607 4953 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:40 crc kubenswrapper[4953]: I0223 00:32:40.014888 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-94gsl"] Feb 23 00:32:40 crc kubenswrapper[4953]: I0223 00:32:40.381836 4953 generic.go:334] "Generic (PLEG): container finished" podID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerID="3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26" exitCode=0 Feb 23 00:32:40 crc kubenswrapper[4953]: I0223 00:32:40.381935 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94gsl" event={"ID":"af9a4bcf-2982-45b5-bcad-35df537e5c9c","Type":"ContainerDied","Data":"3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26"} Feb 23 00:32:40 crc kubenswrapper[4953]: I0223 00:32:40.381975 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94gsl" event={"ID":"af9a4bcf-2982-45b5-bcad-35df537e5c9c","Type":"ContainerStarted","Data":"0d9a2181e86bb82063d43148df4f05a1eea158c2441958014f94ee5675748892"} Feb 23 00:32:40 crc kubenswrapper[4953]: I0223 00:32:40.383887 4953 generic.go:334] "Generic (PLEG): container finished" podID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerID="8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447" exitCode=0 Feb 23 00:32:40 crc kubenswrapper[4953]: I0223 00:32:40.383941 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2px74/must-gather-5b84w" event={"ID":"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7","Type":"ContainerDied","Data":"8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447"} Feb 23 00:32:40 crc kubenswrapper[4953]: I0223 00:32:40.384771 4953 scope.go:117] "RemoveContainer" containerID="8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447" Feb 23 00:32:40 crc kubenswrapper[4953]: I0223 00:32:40.732520 4953 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-must-gather-2px74_must-gather-5b84w_38bc69a6-4156-4dc4-bcd3-cfaee3010fe7/gather/0.log" Feb 23 00:32:41 crc kubenswrapper[4953]: I0223 00:32:41.403588 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94gsl" event={"ID":"af9a4bcf-2982-45b5-bcad-35df537e5c9c","Type":"ContainerStarted","Data":"fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d"} Feb 23 00:32:42 crc kubenswrapper[4953]: I0223 00:32:42.415336 4953 generic.go:334] "Generic (PLEG): container finished" podID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerID="fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d" exitCode=0 Feb 23 00:32:42 crc kubenswrapper[4953]: I0223 00:32:42.415406 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94gsl" event={"ID":"af9a4bcf-2982-45b5-bcad-35df537e5c9c","Type":"ContainerDied","Data":"fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d"} Feb 23 00:32:43 crc kubenswrapper[4953]: E0223 00:32:43.332830 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:32:43 crc kubenswrapper[4953]: I0223 00:32:43.426037 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94gsl" event={"ID":"af9a4bcf-2982-45b5-bcad-35df537e5c9c","Type":"ContainerStarted","Data":"faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba"} Feb 23 00:32:43 crc kubenswrapper[4953]: I0223 00:32:43.453721 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-94gsl" 
podStartSLOduration=1.988275545 podStartE2EDuration="4.453687273s" podCreationTimestamp="2026-02-23 00:32:39 +0000 UTC" firstStartedPulling="2026-02-23 00:32:40.384311919 +0000 UTC m=+1558.318153765" lastFinishedPulling="2026-02-23 00:32:42.849723647 +0000 UTC m=+1560.783565493" observedRunningTime="2026-02-23 00:32:43.445558434 +0000 UTC m=+1561.379400290" watchObservedRunningTime="2026-02-23 00:32:43.453687273 +0000 UTC m=+1561.387529159" Feb 23 00:32:47 crc kubenswrapper[4953]: I0223 00:32:47.843477 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2px74/must-gather-5b84w"] Feb 23 00:32:47 crc kubenswrapper[4953]: I0223 00:32:47.845842 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2px74/must-gather-5b84w" podUID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerName="copy" containerID="cri-o://0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef" gracePeriod=2 Feb 23 00:32:47 crc kubenswrapper[4953]: I0223 00:32:47.849616 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2px74/must-gather-5b84w"] Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.276723 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2px74_must-gather-5b84w_38bc69a6-4156-4dc4-bcd3-cfaee3010fe7/copy/0.log" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.277689 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.353554 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tss\" (UniqueName: \"kubernetes.io/projected/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-kube-api-access-k9tss\") pod \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\" (UID: \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\") " Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.353675 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-must-gather-output\") pod \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\" (UID: \"38bc69a6-4156-4dc4-bcd3-cfaee3010fe7\") " Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.364499 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-kube-api-access-k9tss" (OuterVolumeSpecName: "kube-api-access-k9tss") pod "38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" (UID: "38bc69a6-4156-4dc4-bcd3-cfaee3010fe7"). InnerVolumeSpecName "kube-api-access-k9tss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.407496 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" (UID: "38bc69a6-4156-4dc4-bcd3-cfaee3010fe7"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.457355 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9tss\" (UniqueName: \"kubernetes.io/projected/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-kube-api-access-k9tss\") on node \"crc\" DevicePath \"\"" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.457400 4953 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.469175 4953 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2px74_must-gather-5b84w_38bc69a6-4156-4dc4-bcd3-cfaee3010fe7/copy/0.log" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.469606 4953 generic.go:334] "Generic (PLEG): container finished" podID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerID="0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef" exitCode=143 Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.469663 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2px74/must-gather-5b84w" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.469670 4953 scope.go:117] "RemoveContainer" containerID="0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.490404 4953 scope.go:117] "RemoveContainer" containerID="8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.587105 4953 scope.go:117] "RemoveContainer" containerID="0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef" Feb 23 00:32:48 crc kubenswrapper[4953]: E0223 00:32:48.588163 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef\": container with ID starting with 0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef not found: ID does not exist" containerID="0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.588223 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef"} err="failed to get container status \"0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef\": rpc error: code = NotFound desc = could not find container \"0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef\": container with ID starting with 0e906cfc7b5d4a8eea4de13b287a2c07c5583ac51237c2a5410fcad69a7becef not found: ID does not exist" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.588253 4953 scope.go:117] "RemoveContainer" containerID="8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447" Feb 23 00:32:48 crc kubenswrapper[4953]: E0223 00:32:48.588556 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447\": container with ID starting with 8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447 not found: ID does not exist" containerID="8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447" Feb 23 00:32:48 crc kubenswrapper[4953]: I0223 00:32:48.588606 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447"} err="failed to get container status \"8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447\": rpc error: code = NotFound desc = could not find container \"8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447\": container with ID starting with 8f23c1d775a134e6de61ad400d98e6b5e513343679f52ab3bd5fb051afa37447 not found: ID does not exist" Feb 23 00:32:49 crc kubenswrapper[4953]: I0223 00:32:49.335519 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" path="/var/lib/kubelet/pods/38bc69a6-4156-4dc4-bcd3-cfaee3010fe7/volumes" Feb 23 00:32:49 crc kubenswrapper[4953]: I0223 00:32:49.754779 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:49 crc kubenswrapper[4953]: I0223 00:32:49.754827 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:49 crc kubenswrapper[4953]: I0223 00:32:49.801978 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:50 crc kubenswrapper[4953]: E0223 00:32:50.329078 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:32:50 crc kubenswrapper[4953]: I0223 00:32:50.553766 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:50 crc kubenswrapper[4953]: I0223 00:32:50.620063 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-94gsl"] Feb 23 00:32:52 crc kubenswrapper[4953]: I0223 00:32:52.501582 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-94gsl" podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerName="registry-server" containerID="cri-o://faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba" gracePeriod=2 Feb 23 00:32:52 crc kubenswrapper[4953]: I0223 00:32:52.993469 4953 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.045538 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-catalog-content\") pod \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.045669 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-utilities\") pod \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.045825 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqw85\" (UniqueName: \"kubernetes.io/projected/af9a4bcf-2982-45b5-bcad-35df537e5c9c-kube-api-access-lqw85\") pod \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\" (UID: \"af9a4bcf-2982-45b5-bcad-35df537e5c9c\") " Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.047510 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-utilities" (OuterVolumeSpecName: "utilities") pod "af9a4bcf-2982-45b5-bcad-35df537e5c9c" (UID: "af9a4bcf-2982-45b5-bcad-35df537e5c9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.055589 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9a4bcf-2982-45b5-bcad-35df537e5c9c-kube-api-access-lqw85" (OuterVolumeSpecName: "kube-api-access-lqw85") pod "af9a4bcf-2982-45b5-bcad-35df537e5c9c" (UID: "af9a4bcf-2982-45b5-bcad-35df537e5c9c"). InnerVolumeSpecName "kube-api-access-lqw85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.106906 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af9a4bcf-2982-45b5-bcad-35df537e5c9c" (UID: "af9a4bcf-2982-45b5-bcad-35df537e5c9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.147096 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqw85\" (UniqueName: \"kubernetes.io/projected/af9a4bcf-2982-45b5-bcad-35df537e5c9c-kube-api-access-lqw85\") on node \"crc\" DevicePath \"\"" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.147145 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.147160 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9a4bcf-2982-45b5-bcad-35df537e5c9c-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.511467 4953 generic.go:334] "Generic (PLEG): container finished" podID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerID="faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba" exitCode=0 Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.511521 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94gsl" event={"ID":"af9a4bcf-2982-45b5-bcad-35df537e5c9c","Type":"ContainerDied","Data":"faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba"} Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.511559 4953 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-94gsl" event={"ID":"af9a4bcf-2982-45b5-bcad-35df537e5c9c","Type":"ContainerDied","Data":"0d9a2181e86bb82063d43148df4f05a1eea158c2441958014f94ee5675748892"} Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.511600 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-94gsl" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.511599 4953 scope.go:117] "RemoveContainer" containerID="faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.546738 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-94gsl"] Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.548216 4953 scope.go:117] "RemoveContainer" containerID="fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.556524 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-94gsl"] Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.597184 4953 scope.go:117] "RemoveContainer" containerID="3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.618236 4953 scope.go:117] "RemoveContainer" containerID="faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba" Feb 23 00:32:53 crc kubenswrapper[4953]: E0223 00:32:53.618831 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba\": container with ID starting with faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba not found: ID does not exist" containerID="faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 
00:32:53.618863 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba"} err="failed to get container status \"faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba\": rpc error: code = NotFound desc = could not find container \"faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba\": container with ID starting with faa5c6a92353d8e73503fe1ef2b51037880f3a98031e9af5e15025da46f8daba not found: ID does not exist" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.618903 4953 scope.go:117] "RemoveContainer" containerID="fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d" Feb 23 00:32:53 crc kubenswrapper[4953]: E0223 00:32:53.619270 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d\": container with ID starting with fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d not found: ID does not exist" containerID="fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.619348 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d"} err="failed to get container status \"fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d\": rpc error: code = NotFound desc = could not find container \"fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d\": container with ID starting with fb734080a7e4be51645511641b9dd28b2565550cf6f25d6cadb7a526a776794d not found: ID does not exist" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.619390 4953 scope.go:117] "RemoveContainer" containerID="3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26" Feb 23 00:32:53 crc 
kubenswrapper[4953]: E0223 00:32:53.619716 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26\": container with ID starting with 3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26 not found: ID does not exist" containerID="3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26" Feb 23 00:32:53 crc kubenswrapper[4953]: I0223 00:32:53.619759 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26"} err="failed to get container status \"3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26\": rpc error: code = NotFound desc = could not find container \"3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26\": container with ID starting with 3b1980cf6e84eeef50f23b996e6ef11e1e93bccf08de0774e6b0d15f79083c26 not found: ID does not exist" Feb 23 00:32:55 crc kubenswrapper[4953]: I0223 00:32:55.344024 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" path="/var/lib/kubelet/pods/af9a4bcf-2982-45b5-bcad-35df537e5c9c/volumes" Feb 23 00:32:58 crc kubenswrapper[4953]: E0223 00:32:58.330149 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.752544 4953 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6k4g"] Feb 23 00:33:03 crc kubenswrapper[4953]: E0223 00:33:03.753758 4953 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerName="gather" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.753776 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerName="gather" Feb 23 00:33:03 crc kubenswrapper[4953]: E0223 00:33:03.753795 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerName="copy" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.753802 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerName="copy" Feb 23 00:33:03 crc kubenswrapper[4953]: E0223 00:33:03.753818 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerName="extract-content" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.753826 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerName="extract-content" Feb 23 00:33:03 crc kubenswrapper[4953]: E0223 00:33:03.754388 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerName="registry-server" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.754404 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerName="registry-server" Feb 23 00:33:03 crc kubenswrapper[4953]: E0223 00:33:03.754420 4953 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerName="extract-utilities" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.754428 4953 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerName="extract-utilities" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.754571 4953 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="af9a4bcf-2982-45b5-bcad-35df537e5c9c" containerName="registry-server" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.754584 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerName="copy" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.754599 4953 memory_manager.go:354] "RemoveStaleState removing state" podUID="38bc69a6-4156-4dc4-bcd3-cfaee3010fe7" containerName="gather" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.755698 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.769133 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6k4g"] Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.871047 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-catalog-content\") pod \"community-operators-h6k4g\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") " pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.871158 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws24z\" (UniqueName: \"kubernetes.io/projected/9362e25d-f156-4bbe-b1ec-92784c3e497b-kube-api-access-ws24z\") pod \"community-operators-h6k4g\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") " pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.871191 4953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-utilities\") pod \"community-operators-h6k4g\" (UID: 
\"9362e25d-f156-4bbe-b1ec-92784c3e497b\") " pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.973310 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-catalog-content\") pod \"community-operators-h6k4g\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") " pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.973398 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws24z\" (UniqueName: \"kubernetes.io/projected/9362e25d-f156-4bbe-b1ec-92784c3e497b-kube-api-access-ws24z\") pod \"community-operators-h6k4g\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") " pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.973435 4953 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-utilities\") pod \"community-operators-h6k4g\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") " pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.974062 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-catalog-content\") pod \"community-operators-h6k4g\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") " pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.974123 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-utilities\") pod \"community-operators-h6k4g\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") 
" pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:03 crc kubenswrapper[4953]: I0223 00:33:03.998649 4953 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws24z\" (UniqueName: \"kubernetes.io/projected/9362e25d-f156-4bbe-b1ec-92784c3e497b-kube-api-access-ws24z\") pod \"community-operators-h6k4g\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") " pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:04 crc kubenswrapper[4953]: I0223 00:33:04.078789 4953 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6k4g" Feb 23 00:33:04 crc kubenswrapper[4953]: I0223 00:33:04.386148 4953 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6k4g"] Feb 23 00:33:04 crc kubenswrapper[4953]: I0223 00:33:04.625682 4953 generic.go:334] "Generic (PLEG): container finished" podID="9362e25d-f156-4bbe-b1ec-92784c3e497b" containerID="2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9" exitCode=0 Feb 23 00:33:04 crc kubenswrapper[4953]: I0223 00:33:04.625766 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6k4g" event={"ID":"9362e25d-f156-4bbe-b1ec-92784c3e497b","Type":"ContainerDied","Data":"2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9"} Feb 23 00:33:04 crc kubenswrapper[4953]: I0223 00:33:04.625861 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6k4g" event={"ID":"9362e25d-f156-4bbe-b1ec-92784c3e497b","Type":"ContainerStarted","Data":"fe78ed361f19c402eaa3c5b4456cbcee279da5924afed258e0b08e86cbb50628"} Feb 23 00:33:05 crc kubenswrapper[4953]: E0223 00:33:05.329145 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:33:05 crc kubenswrapper[4953]: I0223 00:33:05.636127 4953 generic.go:334] "Generic (PLEG): container finished" podID="9362e25d-f156-4bbe-b1ec-92784c3e497b" containerID="4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3" exitCode=0
Feb 23 00:33:05 crc kubenswrapper[4953]: I0223 00:33:05.636183 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6k4g" event={"ID":"9362e25d-f156-4bbe-b1ec-92784c3e497b","Type":"ContainerDied","Data":"4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3"}
Feb 23 00:33:06 crc kubenswrapper[4953]: I0223 00:33:06.647251 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6k4g" event={"ID":"9362e25d-f156-4bbe-b1ec-92784c3e497b","Type":"ContainerStarted","Data":"a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e"}
Feb 23 00:33:06 crc kubenswrapper[4953]: I0223 00:33:06.671593 4953 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h6k4g" podStartSLOduration=2.257440303 podStartE2EDuration="3.6715663s" podCreationTimestamp="2026-02-23 00:33:03 +0000 UTC" firstStartedPulling="2026-02-23 00:33:04.628564097 +0000 UTC m=+1582.562405983" lastFinishedPulling="2026-02-23 00:33:06.042690124 +0000 UTC m=+1583.976531980" observedRunningTime="2026-02-23 00:33:06.670056631 +0000 UTC m=+1584.603898517" watchObservedRunningTime="2026-02-23 00:33:06.6715663 +0000 UTC m=+1584.605408186"
Feb 23 00:33:09 crc kubenswrapper[4953]: E0223 00:33:09.331778 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:33:14 crc kubenswrapper[4953]: I0223 00:33:14.079366 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6k4g"
Feb 23 00:33:14 crc kubenswrapper[4953]: I0223 00:33:14.079774 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6k4g"
Feb 23 00:33:14 crc kubenswrapper[4953]: I0223 00:33:14.132586 4953 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6k4g"
Feb 23 00:33:14 crc kubenswrapper[4953]: I0223 00:33:14.700116 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 00:33:14 crc kubenswrapper[4953]: I0223 00:33:14.700720 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 00:33:14 crc kubenswrapper[4953]: I0223 00:33:14.752449 4953 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h6k4g"
Feb 23 00:33:14 crc kubenswrapper[4953]: I0223 00:33:14.814672 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6k4g"]
Feb 23 00:33:16 crc kubenswrapper[4953]: I0223 00:33:16.722854 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h6k4g" podUID="9362e25d-f156-4bbe-b1ec-92784c3e497b" containerName="registry-server" containerID="cri-o://a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e" gracePeriod=2
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.161568 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6k4g"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.294656 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-catalog-content\") pod \"9362e25d-f156-4bbe-b1ec-92784c3e497b\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") "
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.294741 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-utilities\") pod \"9362e25d-f156-4bbe-b1ec-92784c3e497b\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") "
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.294799 4953 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws24z\" (UniqueName: \"kubernetes.io/projected/9362e25d-f156-4bbe-b1ec-92784c3e497b-kube-api-access-ws24z\") pod \"9362e25d-f156-4bbe-b1ec-92784c3e497b\" (UID: \"9362e25d-f156-4bbe-b1ec-92784c3e497b\") "
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.296306 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-utilities" (OuterVolumeSpecName: "utilities") pod "9362e25d-f156-4bbe-b1ec-92784c3e497b" (UID: "9362e25d-f156-4bbe-b1ec-92784c3e497b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.304453 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9362e25d-f156-4bbe-b1ec-92784c3e497b-kube-api-access-ws24z" (OuterVolumeSpecName: "kube-api-access-ws24z") pod "9362e25d-f156-4bbe-b1ec-92784c3e497b" (UID: "9362e25d-f156-4bbe-b1ec-92784c3e497b"). InnerVolumeSpecName "kube-api-access-ws24z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.346632 4953 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9362e25d-f156-4bbe-b1ec-92784c3e497b" (UID: "9362e25d-f156-4bbe-b1ec-92784c3e497b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.397390 4953 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.397449 4953 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9362e25d-f156-4bbe-b1ec-92784c3e497b-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.397473 4953 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws24z\" (UniqueName: \"kubernetes.io/projected/9362e25d-f156-4bbe-b1ec-92784c3e497b-kube-api-access-ws24z\") on node \"crc\" DevicePath \"\""
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.735424 4953 generic.go:334] "Generic (PLEG): container finished" podID="9362e25d-f156-4bbe-b1ec-92784c3e497b" containerID="a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e" exitCode=0
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.735520 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6k4g" event={"ID":"9362e25d-f156-4bbe-b1ec-92784c3e497b","Type":"ContainerDied","Data":"a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e"}
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.735582 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6k4g" event={"ID":"9362e25d-f156-4bbe-b1ec-92784c3e497b","Type":"ContainerDied","Data":"fe78ed361f19c402eaa3c5b4456cbcee279da5924afed258e0b08e86cbb50628"}
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.735627 4953 scope.go:117] "RemoveContainer" containerID="a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.735738 4953 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6k4g"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.763557 4953 scope.go:117] "RemoveContainer" containerID="4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.785557 4953 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6k4g"]
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.792110 4953 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h6k4g"]
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.809267 4953 scope.go:117] "RemoveContainer" containerID="2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.835032 4953 scope.go:117] "RemoveContainer" containerID="a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e"
Feb 23 00:33:17 crc kubenswrapper[4953]: E0223 00:33:17.836254 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e\": container with ID starting with a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e not found: ID does not exist" containerID="a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.836326 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e"} err="failed to get container status \"a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e\": rpc error: code = NotFound desc = could not find container \"a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e\": container with ID starting with a24de949830e4e06e8f1271a212d8409eda7ca86f604ad369e2f68f009792e7e not found: ID does not exist"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.836360 4953 scope.go:117] "RemoveContainer" containerID="4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3"
Feb 23 00:33:17 crc kubenswrapper[4953]: E0223 00:33:17.836825 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3\": container with ID starting with 4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3 not found: ID does not exist" containerID="4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.836864 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3"} err="failed to get container status \"4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3\": rpc error: code = NotFound desc = could not find container \"4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3\": container with ID starting with 4702cc89a671a798ad110f13a99f91441bf2bf9ac977a697bf65a5683cf16df3 not found: ID does not exist"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.836892 4953 scope.go:117] "RemoveContainer" containerID="2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9"
Feb 23 00:33:17 crc kubenswrapper[4953]: E0223 00:33:17.837420 4953 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9\": container with ID starting with 2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9 not found: ID does not exist" containerID="2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9"
Feb 23 00:33:17 crc kubenswrapper[4953]: I0223 00:33:17.837451 4953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9"} err="failed to get container status \"2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9\": rpc error: code = NotFound desc = could not find container \"2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9\": container with ID starting with 2112331a329a3981bc6a12031003a3695b1153ac849c57423c8f47d00251e6c9 not found: ID does not exist"
Feb 23 00:33:19 crc kubenswrapper[4953]: E0223 00:33:19.328699 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:33:19 crc kubenswrapper[4953]: I0223 00:33:19.337127 4953 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9362e25d-f156-4bbe-b1ec-92784c3e497b" path="/var/lib/kubelet/pods/9362e25d-f156-4bbe-b1ec-92784c3e497b/volumes"
Feb 23 00:33:21 crc kubenswrapper[4953]: E0223 00:33:21.328216 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:33:33 crc kubenswrapper[4953]: E0223 00:33:33.335950 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:33:34 crc kubenswrapper[4953]: E0223 00:33:34.328794 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:33:44 crc kubenswrapper[4953]: I0223 00:33:44.699691 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 00:33:44 crc kubenswrapper[4953]: I0223 00:33:44.700638 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 00:33:48 crc kubenswrapper[4953]: E0223 00:33:48.329635 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:33:48 crc kubenswrapper[4953]: E0223 00:33:48.329677 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:34:00 crc kubenswrapper[4953]: E0223 00:34:00.330589 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:34:00 crc kubenswrapper[4953]: E0223 00:34:00.330830 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:34:11 crc kubenswrapper[4953]: E0223 00:34:11.329945 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:34:14 crc kubenswrapper[4953]: I0223 00:34:14.700431 4953 patch_prober.go:28] interesting pod/machine-config-daemon-gpl86 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 00:34:14 crc kubenswrapper[4953]: I0223 00:34:14.700963 4953 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 00:34:14 crc kubenswrapper[4953]: I0223 00:34:14.701051 4953 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpl86"
Feb 23 00:34:14 crc kubenswrapper[4953]: I0223 00:34:14.702108 4953 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"} pod="openshift-machine-config-operator/machine-config-daemon-gpl86" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 00:34:14 crc kubenswrapper[4953]: I0223 00:34:14.702208 4953 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerName="machine-config-daemon" containerID="cri-o://d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168" gracePeriod=600
Feb 23 00:34:14 crc kubenswrapper[4953]: E0223 00:34:14.834723 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:34:15 crc kubenswrapper[4953]: I0223 00:34:15.174778 4953 generic.go:334] "Generic (PLEG): container finished" podID="fca7afa5-1274-436d-ab61-9e8796e4774c" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168" exitCode=0
Feb 23 00:34:15 crc kubenswrapper[4953]: I0223 00:34:15.174827 4953 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" event={"ID":"fca7afa5-1274-436d-ab61-9e8796e4774c","Type":"ContainerDied","Data":"d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"}
Feb 23 00:34:15 crc kubenswrapper[4953]: I0223 00:34:15.174870 4953 scope.go:117] "RemoveContainer" containerID="7bbb1432a118683a73efc7d5fa3c37ea9fc9b879091deb6f1bba6e886fc0c71f"
Feb 23 00:34:15 crc kubenswrapper[4953]: I0223 00:34:15.175950 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"
Feb 23 00:34:15 crc kubenswrapper[4953]: E0223 00:34:15.176409 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:34:15 crc kubenswrapper[4953]: E0223 00:34:15.327829 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:34:25 crc kubenswrapper[4953]: E0223 00:34:25.331644 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:34:28 crc kubenswrapper[4953]: I0223 00:34:28.330197 4953 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 00:34:28 crc kubenswrapper[4953]: E0223 00:34:28.388018 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 23 00:34:28 crc kubenswrapper[4953]: E0223 00:34:28.388601 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnzrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-hwwhn_service-telemetry(7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 23 00:34:28 crc kubenswrapper[4953]: E0223 00:34:28.390270 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:34:30 crc kubenswrapper[4953]: I0223 00:34:30.327547 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"
Feb 23 00:34:30 crc kubenswrapper[4953]: E0223 00:34:30.328064 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:34:39 crc kubenswrapper[4953]: E0223 00:34:39.378358 4953 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Feb 23 00:34:39 crc kubenswrapper[4953]: E0223 00:34:39.381571 4953 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc9kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-szrd7_service-telemetry(fcc5fd41-349f-4826-8b86-e7aef82e8f7b): ErrImagePull: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Feb 23 00:34:39 crc kubenswrapper[4953]: E0223 00:34:39.382927 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:34:40 crc kubenswrapper[4953]: E0223 00:34:40.328803 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:34:44 crc kubenswrapper[4953]: I0223 00:34:44.326407 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"
Feb 23 00:34:44 crc kubenswrapper[4953]: E0223 00:34:44.329137 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:34:52 crc kubenswrapper[4953]: E0223 00:34:52.329593 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:34:55 crc kubenswrapper[4953]: E0223 00:34:55.329762 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:34:57 crc kubenswrapper[4953]: I0223 00:34:57.326425 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"
Feb 23 00:34:57 crc kubenswrapper[4953]: E0223 00:34:57.326863 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:35:05 crc kubenswrapper[4953]: E0223 00:35:05.330397 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:35:10 crc kubenswrapper[4953]: E0223 00:35:10.331696 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:35:11 crc kubenswrapper[4953]: I0223 00:35:11.327581 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"
Feb 23 00:35:11 crc kubenswrapper[4953]: E0223 00:35:11.327995 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:35:19 crc kubenswrapper[4953]: E0223 00:35:19.329539 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:35:22 crc kubenswrapper[4953]: I0223 00:35:22.326994 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"
Feb 23 00:35:22 crc kubenswrapper[4953]: E0223 00:35:22.327810 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:35:25 crc kubenswrapper[4953]: E0223 00:35:25.330345 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:35:32 crc kubenswrapper[4953]: E0223 00:35:32.177675 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:35:34 crc kubenswrapper[4953]: I0223 00:35:34.326424 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"
Feb 23 00:35:34 crc kubenswrapper[4953]: E0223 00:35:34.326711 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:35:39 crc kubenswrapper[4953]: E0223 00:35:39.328793 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81"
Feb 23 00:35:45 crc kubenswrapper[4953]: E0223 00:35:45.328757 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"
Feb 23 00:35:49 crc kubenswrapper[4953]: I0223 00:35:49.335444 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168"
Feb 23 00:35:49 crc kubenswrapper[4953]: E0223 00:35:49.336124 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c"
Feb 23 00:35:53 crc kubenswrapper[4953]: E0223 00:35:53.335602 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:35:58 crc kubenswrapper[4953]: E0223 00:35:58.329235 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b" Feb 23 00:36:04 crc kubenswrapper[4953]: I0223 00:36:04.326949 4953 scope.go:117] "RemoveContainer" containerID="d423795e6d3a507cb818ec0c46541705d42f1d4dd4b8b8af87432826a5a4b168" Feb 23 00:36:04 crc kubenswrapper[4953]: E0223 00:36:04.328010 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpl86_openshift-machine-config-operator(fca7afa5-1274-436d-ab61-9e8796e4774c)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpl86" podUID="fca7afa5-1274-436d-ab61-9e8796e4774c" Feb 23 00:36:07 crc kubenswrapper[4953]: E0223 00:36:07.330407 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-hwwhn" podUID="7082ef46-4c7d-4d6d-b0c2-c6d341bfbc81" Feb 23 00:36:09 crc kubenswrapper[4953]: E0223 00:36:09.327874 4953 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-szrd7" podUID="fcc5fd41-349f-4826-8b86-e7aef82e8f7b"